Re: European ISP enables IPv6 for all?
On Tue, 18 Dec 2007 12:14:52 +0100 Iljitsch van Beijnum [EMAIL PROTECTED] wrote: I'd say that the huge address space makes life impossible for scanning worms. Perhaps for random address scanning, but certainly not for scanning worms generally. In addition to the paper Steve Bellovin provided a link to, consider how many vulnerabilities are in the app compared to the stack and raw listeners these days. Miscreants know how to programmatically feed a list of search terms to the search engines and parse the results. There are a lot of vulnerable web apps out there and they are actively being scanned, tested and exploited. Plug the words 'rfi' and 'scanner' into a search engine for further detail. John
Re: Book on Network Architecture and Design
On Mon, 03 Dec 2007 15:16:47 -0200 MARLON BORBA [EMAIL PROTECTED] wrote: I am in search of a good book about Network Architecture and Design, with emphasis in Quality of Service and convergent networks, to be used as a reference. Could you please indicate your favorites? Some might say those are two mutually exclusive sets, but here are some of my favorite general reference networking books, good for most any netop's library (in no particular order): An Engineering Approach to Computer Networks - S. Keshav; TCP/IP Illustrated, Volume 1 - W. Richard Stevens; Internet Core Protocols - Eric A. Hall; Interconnections, 2nd Edition - Radia Perlman; Computer Networks: A Systems Approach - Larry Peterson and Bruce Davie. If you want something specific to vendor hardware and configurations I'm sure others will be happy to suggest their favorites; personally I don't generally find those sorts of texts as useful, so I can't really speak to any of the multitude available. John
Re: Hey, SiteFinder is back, again...
On Sun, 4 Nov 2007 11:52:11 -0500 (EST) Sean Donelan [EMAIL PROTECTED] wrote: I just wish the IETF would acknowledge this and go ahead and define a DNS bit for artificial DNS answers for all these address correction and domain parking and domain tasting people to use for their keen Web 2.0 ideas. Yes, let's let the IETF go off for 7 years to debate and try to put into an RFC something else that won't actually be used. Sorry Sean, you've lost me on this one. :-) John
Re: Can P2P applications learn to play fair on networks?
On Thu, 25 Oct 2007 12:50:32 -0400 (EDT) Sean Donelan [EMAIL PROTECTED] wrote: Comcast's network is QOS DSCP enabled, as are many other large provider networks. Enterprise customers use QOS DSCP all the time. However, the net neutrality battles last year made it politically impossible for providers to say they use QOS in their consumer networks. re: http://www.merit.edu/mail.archives/nanog/2005-12/msg00334.html This came up before and I'll ask again, what do you mean by QoS? And what precisely does QoS DSCP really mean here? It's important to know what queueing, dropping, limiting, etc. policies and hardware/buffering capabilities are associated with the DSCP settings. Otherwise it's just a buzzword on a checklist that might not even actually do anything. I'd also like to hear what monitoring and management capabilities are deployed; that was a real problem last time I checked. How much has really changed? Do you (or if someone on these big nets wants to own up offlist) have pointers to indicate that deployments are significantly different now than they were a couple of years ago? Even better, perhaps someone can do a preso at a future meeting on their recent deployment experience? I did one a couple of years ago and I haven't heard of things improving markedly since then, but then I am still recovering from having drunk from that jug of kool-aid. :-) John
Re: large organization nameservers sending icmp packets to dns servers.
On Fri, 10 Aug 2007 16:11:04 -0700 Douglas Otis [EMAIL PROTECTED] wrote: TCP offers a means to escape UDP related issues. On the other hand, blocking TCP may offer the necessary motivation for having these UDP issues fixed. After all, only UDP should be required. When TCP is designed to readily fail, reliance upon TCP seems questionable. As DNSSEC is introduced, TCP could be relied upon in the growing number of instances where UDP is improperly handled. As a datapoint I ran some tests against a reasonably diverse and sizeable TLD zone I work with in another forum. I queried the name servers listed in the parent to see if I could successfully query them, using TCP, for the domain name they are configured to serve. Out of about 9,300 unique name servers I failed to receive any answer from about 1,700 of them. That is a bit more than an 18% failure rate. John
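For what it's worth, a probe like the one used for that survey can be sketched in a few lines of Python. This builds a minimal NS query by hand rather than using a DNS library; the query name, server list and timeout are whatever your survey calls for, not anything from the test I ran:

```python
import socket
import struct

def build_query(qname, qtype=2, qclass=1):
    """Build a minimal DNS query message (qtype 2 = NS, qclass 1 = IN)."""
    # Header: id, flags (RD set), qdcount=1, ancount/nscount/arcount=0.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    question = b""
    for label in qname.rstrip(".").split("."):
        question += bytes([len(label)]) + label.encode("ascii")
    question += b"\x00" + struct.pack(">HH", qtype, qclass)
    return header + question

def query_tcp(server, qname, timeout=5):
    """Send the query over TCP: RFC 1035 prefixes the message with a
    2-byte length. Returns the raw response bytes (sketch only; a real
    survey should loop on recv() until the full length arrives)."""
    msg = build_query(qname)
    wire = struct.pack(">H", len(msg)) + msg
    with socket.create_connection((server, 53), timeout=timeout) as s:
        s.sendall(wire)
        rlen = struct.unpack(">H", s.recv(2))[0]
        return s.recv(rlen)
```

A survey loop would simply call query_tcp() for each (server, zone) pair and count timeouts and connection refusals as failures.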
Re: IPv6 Advertisements
On Tue, 29 May 2007 15:08:34 +0000 (GMT) Chris L. Morrow [EMAIL PROTECTED] wrote: vixie had a fun discussion about anycast and dns... something about him being sad/sorry about making everyone have to carry a /24 for f-root everywhere. I think there is a list of 'golden prefixes' or something; normally this is where Jeroen Massar jumps in with GRH data and pointers. Given the lack of protections against route hijacking, there are arguments to be made for announcing more specifics. As there will be arguments over where that line should be drawn and who gets to draw it. :-) John
Re: Network end users to pull down 2 gigabytes a day, continuously?
On Tue, 9 Jan 2007 13:21:38 -0500 Marshall Eubanks [EMAIL PROTECTED] wrote: You are correct. Today, IP multicast is limited to a few small closed networks. If we ever migrate to IPv6, this would instantly change. I am curious. Why do you think that? I could have said the same thing, but with the opposite meaning. You take one 10+ year-old technology with minimal deployment and put it on top of another 10+ year-old technology also far from being widely deployed, and you end up with something quickly approaching zero deployment, instantly. :-) John
Re: How to pick a Site-Local Scope multi cast address
On Fri, 8 Dec 2006 09:54:03 -0600 Dave Raskin [EMAIL PROTECTED] wrote: Hello, I have been directed to this list by IANA when I asked the following question: An even better set of lists might be: https://www1.ietf.org/mailman/listinfo/mboned https://mail.internet2.edu/wws/info/wg-multicast There is some overlap between the two, but the former is probably the best place to start. Both are good lists that may be relevant for you to hang out in, as they often cover the protocol and operational aspects you may want to follow. Both are low volume. My question is this: First, let me say... THANK YOU! Presuming you are a multicast app developer, you actually asked, which is terrific! Most don't, and what ends up happening is growth in the multicast swamp, where site-local apps like the one you're presumably working with end up leaking all over the place, taking up valuable mcast router memory and cpu time. Now, the bad news. How do I pick a group address within this range and not have a chance of colliding with some other application on the network already using the group address I just picked? Do I just randomly pick an address in that range and hope for the best? I am running on Windows and cannot assume that there is a MADCAP server available. You can probably never expect to find a MADCAP server. I don't think I've ever heard of anyone deploying one; I'm sure a handful have tried, but it never got much deployment outside a select few environments or the lab. IP multicast addressing has been a bit of a problem, to say the least. A couple of documents to read might be: http://www.watersprings.org/pub/id/draft-ietf-mboned-addrarch-05.txt http://www.watersprings.org/pub/id/draft-ietf-mboned-addrdisc-problems-02.txt Then perhaps follow up on mboned if you still have questions. Some of the people that hang out there hang out here and may have more to say, since I haven't been following closely what's going on for the past year.
I don't think you're going to find the satisfying answer you were looking for, but that's IP multicast for you. John
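For completeness, the "pick and hope" approach from the question amounts to something like the sketch below. It assumes the IPv4 Local Scope, 239.255.0.0/16, from RFC 2365; nothing here prevents a collision, the randomness just makes one unlikely at a small site:

```python
import random

# Draw a group address from 239.255.0.0/16 (IPv4 Local Scope, RFC 2365).
# This only makes a collision with another app improbable; without
# something like MADCAP there is no way to actually rule one out.
def random_local_scope_group(rng=random):
    return "239.255.%d.%d" % (rng.randrange(256), rng.randrange(256))

print(random_local_scope_group())
```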
Re: anycasting behind different ASNs?
On Wed, 06 Dec 2006 09:38:10 -0800 matthew zeier [EMAIL PROTECTED] wrote: Are there any practical issues with announcing the same route behind different ASNs? This is known as Multiple Origin AS (MOAS), and you should be able to find plenty of discussion and articles about it. It's not uncommon and as far as I know generally doesn't cause any operational problems in and of itself, though doing it should be well thought out and understood, since depending on how things fit into the routing topology, packets may not flow as you expect. Shortly I'll have two separate sites (EU, US) announcing their own space behind their own ASNs, but have a desire to anycast a particular network out of both locations as well. In the talk on zonecheck.fr in reference to testing for authoritative DNS server set diversity at the OARC meeting, something similar to this came up: http://public.oarci.net/oarc/workshop-2006/agenda/ That was not part of the public portion, but the slides are available. Since I basically asked that question of the presenter when AS origin diversity was highlighted as one of the tests, I'll summarize what I think is a reasonable consensus on the issue in that forum. Having a single origin AS in the NS RRset may indicate insufficient network connectivity diversity. This is commonly the case where a single AS represents a network at a geographically isolated institution. In this case it may be appropriate to house a server on another network prefix with a different origin AS and upstream connectivity. In the case of larger networks or anycast, however, this may not be such a useful measure of diversity, and in fact many large DNS service providers use a single origin AS for all their server instances.
One might still argue in those cases that multiple origin ASes might help mitigate problematic local policy decisions such as load balancing that is done based on an ASN or perhaps due to incorrect AS path filters, but I think most would agree that in practice that is a pretty weak argument. John
Re: contacts at dlink and netgear?
On Fri, 17 Nov 2006 00:37:18 +0000 (GMT) Chris L. Morrow [EMAIL PROTECTED] wrote: the wustl.edu folks probably have a good POC for at least netgear... since they had to deal with the netgear 'ntp issue' 2+ years ago (and ongoing still). There was a nanog preso about it I think as well as many other news-ish items... I think you meant wisc.edu: http://www.cs.wisc.edu/~plonka/netgear-sntp/ John
Re: Router / Protocol Problem
On Thu, 7 Sep 2006 07:27:16 -0400 Mike Walter [EMAIL PROTECTED] wrote: Sep 7 06:50:20.697 EST: %SEC-6-IPACCESSLOGP: list 166 denied tcp 69.50.222.8(25) -> 69.4.74.14(2421), 4 packets [...] I'm not very familiar with NBAR or how to use it for CodeRed, but this first rule: access-list 166 deny ip any any dscp 1 log seems dubious. I'm not sure what sets the codepoint to 1 by default, but apparently CodeRed does? Nevertheless, this seems like a very weak basis for determining whether something is malicious. access-list 166 deny tcp any any eq 5554 access-list 166 deny tcp any any eq 9996 access-list 166 deny tcp any any eq 1025 access-list 166 deny udp any any eq 1434 You may realize this, but I bet some of the rules above are matching the occasional legitimate packet, particularly these last four. In fact, I bet the rule that matches on TCP destination port 1025 has a lot of false positives. I'm not sure what you're trying to do with some of them, but if it is to stop some sort of worm, presumably you know that it will also stop applications that happen to choose those ports as their ephemeral port. Windows hosts and apps will probably match the 1025 rule fairly frequently, DNS and NTP will match the UDP rule occasionally, and various things will match the others more or less frequently depending on what traverses your net. Now I have two questions. Is that not a good idea to have this on FE0/0 out? Second, why the heck would a smtp connection be matched via my http-hacks class-map? You don't show the interface config, but my guess is that the SMTP-looking packet may have originally had a codepoint of 1 and didn't really have anything to do with your policy-map. John
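To make the false-positive point concrete, here's a toy model (my own, not derived from the config above) of how purely port-based deny rules behave:

```python
# Toy model of port-based deny rules: each rule matches on protocol and
# destination port only, so any legitimate session whose ephemeral port
# happens to land on a listed port gets dropped too.
DENY = {("tcp", 5554), ("tcp", 9996), ("tcp", 1025), ("udp", 1434)}

def denied(proto, src_port, dst_port):
    # Source port is irrelevant to these rules, which is the problem.
    return (proto, dst_port) in DENY

# A worm probe to TCP/1025 is dropped, as intended...
assert denied("tcp", 3072, 1025)
# ...but so is a DNS reply to a client that happened to pick 1434 as
# its ephemeral source port for the query.
assert denied("udp", 53, 1434)
# A reply to an ephemeral port not on the list passes.
assert not denied("udp", 53, 1433)
```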
Re: mitigating botnet CCs has become useless
On Thu, 03 Aug 2006 12:22:31 -1000 Scott Weeks [EMAIL PROTECTED] wrote: But shutting them down, that's like the police arresting all the informants. It doesn't stop the crime, it just eradicates all your easy leads. What're folks' thoughts on that? Well, that's one perspective. I love the bit about tagging the packets and using QoS (whatever that means) though; that would be a hoot. Keep in mind bots are not just for DoS. They spam, they capture keystrokes and mouseclicks, they can be proxies and so on. If in the name of botnets QoS gets widely deployed, I'll print out this email, puree it in a blender and humbly chug it down at a future NANOG. John
Re: Ultradns using anycast?
On Thu, 27 Jul 2006 12:01:19 -0500 Jeffrey Sharpe [EMAIL PROTECTED] wrote: Does anyone know if Ultradns uses anycast? Or how to get someone at UltraDNS or PIR to take ownership of an issue and resolve it? Anycast, yes. If you want to shoot me an email offline, I or any one of the handful of my colleagues on this list can probably point you in the right direction if we can't help you directly. If you can forward me any prior communication you had attempted, so I know where it was delivered, or was supposed to have been delivered to, and what the problem is, that would obviously be useful. John
Re: Best practices inquiry: filtering 128/1
On Mon, 10 Jul 2006 21:56:27 -0500 Jerry Pasker [EMAIL PROTECTED] wrote: Because you fear that their routers that distribute the feed could become own3d and used to cause a massive DoS by filtering out some networks? Someone in the NANOG community, I forget who now, had the sensible suggestion that you create a filter list based on the bogon list at the time you set up your feed. You use that to limit what you will accept from Cymru. Since bogon blocks will only get allocated, the worst that could happen is the breaking of a recently allocated, formerly bogon network. Even if you don't update your filter list for the next 5 years, the damage is likely to be minimal. John
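A sketch of that suggestion: snapshot the bogon list once at setup time and mechanically turn it into static filter lines. The prefixes and ACL number below are placeholders for illustration (RFC 1918 space plus a then-unallocated /8), not a real bogon snapshot:

```python
import ipaddress

# Placeholder snapshot; a real one would come from the bogon feed at
# the time you set up your session.
bogons_snapshot = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "1.0.0.0/8"]

def to_acl(prefixes, acl=100):
    """Render a prefix list as Cisco-style deny lines plus a final permit."""
    lines = []
    for p in prefixes:
        net = ipaddress.ip_network(p)
        wildcard = net.hostmask  # inverse (wildcard) mask
        lines.append(f"access-list {acl} deny ip {net.network_address} {wildcard} any")
    lines.append(f"access-list {acl} permit ip any any")
    return lines

for line in to_acl(bogons_snapshot):
    print(line)
```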
Re: Control Plane Policing
On Thu, 01 Jun 2006 12:07:00 +0200 hjan [EMAIL PROTECTED] wrote: I have read cisco's doc about cpp and i've also read the good documentation written by John Kristoff about cpp in which are included some implementation examples. The cisco-nsp mailing list is probably a better place for anything specific to Cisco's CoPP, but I'll quickly respond here, because the issue is general enough and others might be interested. You might be interested in reviewing a brief talk I did at the last Joint Techs, in which I went over some of the experiences and lessons learned: http://events.internet2.edu/2006/jt-albuquerque/sessionDetails.cfm?session=2444event=243 Note, the title is Tripping on QoS, but there is CoPP stuff in there. Unfortunately I don't think the session was audio or video recorded. A key point I'd like to make since I originally wrote that page is that it is quite difficult, and probably not the best approach, to use a control plane policy where you end up shovelling any unmatched stuff into a general rate limiter. Phil Rosenthal probably has the right idea: specifically pass things you know you want, maybe rate limiting them, but then have a default deny. access-list 168 permit icmp any loopback0 0.0.0.0 That doesn't look right. You do not need to specify a loopback address. By definition, the control plane policy will apply to any router interface, so perhaps you meant to say something like this: access-list 168 permit icmp any any Although I'm not sure I'd recommend doing what you're doing except for testing purposes. You have to think very carefully about what could happen when you start rate limiting protocols generally. For example, if something ICMP floods your router, will your network availability monitoring system's traffic get starved out? John
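For illustration only, a default-deny policy along the lines Phil suggested might look roughly like this on IOS. The ACL, class and policy names, addresses, and rates are all placeholders, and the exact syntax varies by platform and release:

```
! Sketch only: classify known-good control traffic, police it,
! and drop everything else. [peer-address] is a placeholder.
ip access-list extended acl-copp-bgp
 permit tcp host [peer-address] any eq bgp
 permit tcp host [peer-address] eq bgp any
!
class-map match-all cm-copp-bgp
 match access-group name acl-copp-bgp
!
policy-map pm-copp
 class cm-copp-bgp
  police 256000 8000 conform-action transmit exceed-action drop
 class class-default
  drop
!
control-plane
 service-policy input pm-copp
```

The catch, as noted above, is that class-default catches everything you forgot to classify, so build and verify the permit classes carefully before turning the default into a drop.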
Re: Are botnets relevant to NANOG?
On Fri, 26 May 2006 10:21:10 -0700 Rick Wesson [EMAIL PROTECTED] wrote: lets see, should we be concerned? here are a few interesting tables, the cnt column is new IP addresses we have seen in the last 5 days. Hi Rick, What I'd be curious to know about the numbers being thrown around is whether there has been any accounting of transient address usage. Since I'm spending an awful lot of time with DNS these days, I'll actually provide a cite related to that (and not simply suggest you just quote me :-). See sections 3.3.2 and 4.4 of the following: Availability, Usage and Deployment Characteristics of the Domain Name System, Internet Measurement Conference 2004, J. Pang, et al. At some point transient address pools are limited, and presumably so are the possible numbers of new bots, particularly within netblocks. Is there any accounting for that? Shouldn't there be? What will the effect of doing that be on the numbers? John
Re: Are botnets relevant to NANOG?
On Fri, 26 May 2006 11:50:21 -0700 Rick Wesson [EMAIL PROTECTED] wrote: The longer answer is that we haven't found a reliable way to identify dynamic blocks. Should anyone point me to an authoritative source I'd be happy to do the analysis and provide some graphs on how dynamic addresses affect the numbers. I don't know how effective the dynamic lists maintained by some in the anti-spamming community are, you'd probably know better than I, but that is one way, as described in the paper. In the first section of the paper I cited they list three methods they used to try to capture stable IP addresses. Summarizing those: 1. reverse map the IP address and analyze the hostname 2. do the same for nearby addresses and analyze the character difference ratio 3. compare active probes of the suspect app with icmp echo responses None of these will be foolproof, and the last one will probably only be good for cases where there is a service running where you'd rather there not be and you can test for it (e.g. open relays). There was at least one additional reference to related work in that paper, which leads to more still, but I'll let those interested do their own research on additional ideas for themselves. also note that we are using TCP fingerprinting in our spamtraps and expect to have some interesting results published in the august/sept time frame. We won't be able to say that a block is dynamic but we will be able to better understand if we talk to the same spammer from different ip addresses and how often those addresses change. Will look forward to seeing more. Thanks, John
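Method 1 can be sketched roughly as follows; the token list is my own guess at common dynamic-pool naming conventions, not anything taken from the cited paper:

```python
import re

# Rough heuristic in the spirit of method 1: reverse-map the address,
# then look for tokens that typically mark dynamic pools, or for the
# address itself embedded in the hostname. Token list is a guess.
DYNAMIC_TOKENS = re.compile(
    r"(dyn|dynamic|dsl|dial|pool|ppp|dhcp|cable|catv)", re.IGNORECASE)
EMBEDDED_ADDR = re.compile(r"\d{1,3}[-.]\d{1,3}[-.]\d{1,3}[-.]\d{1,3}")

def looks_dynamic(hostname):
    return bool(DYNAMIC_TOKENS.search(hostname) or EMBEDDED_ADDR.search(hostname))

assert looks_dynamic("host-203-0-113-7.dsl.example.net")
assert not looks_dynamic("ns1.example.net")
```

As the paper's authors note, nothing like this is foolproof; it's a filter for the obvious cases, and methods 2 and 3 exist precisely because hostnames alone mislabel plenty of blocks.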
Re: Determine difference between 2 BGP feeds
On Tue, 18 Apr 2006 16:13:12 -0400 (EDT) Scott Tuc Ellentuch at T-B-O-H [EMAIL PROTECTED] wrote: Is there a utility that I can use that will pull the routes off each router (Foundry preferred), and then compare them as best it can to see why there is such a difference? I don't know anything about Foundry, but if you can simply display the routing table from a terminal, you can go the hacky unix cli tool way. For example, use 'script' to log your terminal session to a file, then, presuming you can show the route table and each route includes a 'via upstream-address' line, do something like this (completely untested and I'm sure someone could come up with something much simpler and better): grep 'via <upstream1-address>' typescript > upstream1 (and likewise for upstream2), then perl -ne 'print "$1\n" if /(\d{1,3}(?:\.\d{1,3}){3}\/\d{1,2})/' upstream1 | sort > upstream1.sorted for each file, and finally comm -23 upstream1.sorted upstream2.sorted and comm -13 upstream1.sorted upstream2.sorted to see what each upstream has that the other doesn't. John
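The same comparison is a little less fragile as a small script; the table format and prefixes below are made up for illustration:

```python
import re

# Pull the prefixes seen via each upstream out of captured route-table
# output, then report what one table has that the other lacks.
PREFIX = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3}/\d{1,2})")

def prefixes(lines):
    return {m.group(1) for line in lines for m in PREFIX.finditer(line)}

def diff(table1, table2):
    p1, p2 = prefixes(table1), prefixes(table2)
    return sorted(p1 - p2), sorted(p2 - p1)

# Made-up route-table lines for illustration.
only1, only2 = diff(
    ["B 192.0.2.0/24 via 198.51.100.1", "B 203.0.113.0/24 via 198.51.100.1"],
    ["B 192.0.2.0/24 via 198.51.100.2"],
)
assert only1 == ["203.0.113.0/24"]
assert only2 == []
```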
Re: Rate-Limiting.
On Thu, 30 Mar 2006 15:56:02 -0800 Robert Sherrard [EMAIL PROTECTED] wrote: I've got a situation in which I'd like to rate limit a few servers that hang off of my 6500s... it appears that this can only be done on a layer 3 interface. These servers however aren't, they're simply on a switch port / access. Aside from hard setting the l2 interface to 10mbit, can anyone think of another creative way to do this? Is one option moving these servers into a separate VLAN, then rate-limiting from there? Is rate limiting by source IP address acceptable to you? If so, then you could do it that way. An untested example that should set you out in search of the necessary doc: class-map match-all cm-src-specific match access-group name acl-src-specific ! interface Vlan99 service-policy input sp-rate-limit ! policy-map sp-rate-limit class cm-src-specific police flow mask src-only 100 4000 conform-action transmit exceed-action drop ! ip access-list extended acl-src-specific permit ip host [server-address] any John
Re: Rate-Limiting.
On Thu, 30 Mar 2006 17:25:38 -0800 Robert Sherrard [EMAIL PROTECTED] wrote: I'm really interested in rate limiting outbound... with many unknown dest IP's. That's what that example was intending to show. That is, rate limiting traffic coming from the servers into the VLAN interface towards the rest of the internetwork on the other side. Don't let the term 'input' fool you. If what you meant was to rate limit traffic to those servers, then I am afraid I can't help you. You could technically do that, but it is probably not of much value to any decent server implementation. John
Re: do bogon filters still help?
On Wed, 11 Jan 2006 13:03:51 -0500 Steven M. Bellovin [EMAIL PROTECTED] wrote: Every time IANA allocates new prefixes, we're treated to complaints about sites that are not reachable because they're in the new space and some places haven't updated their bogon filters. My question is this: have we reached a point where the bogon filters are causing more pain than they're worth? Perhaps operators can be convinced that the only best-practice implementation of bogon filtering is through the use of a well maintained bogon route server service, be it from Team Cymru or some other well regarded 3rd party. All static, manual config management of bogon routes should be strongly discouraged. Now if router vendors could figure out ways to use a bogon route server for multicast protocols, that would be of great help to the niche community that has to run that service. There the pain is arguably worth it (insert dig about multicast being painful with or without them here :-) John
Re: The Qos PipeDream [Was: RE: Two Tiered Internet]
On Thu, 15 Dec 2005 19:15:49 -0500 (EST) Sean Donelan [EMAIL PROTECTED] wrote: AT&T, Global Crossing, Level3, MCI, Savvis, Sprint, etc have sold QOS services for years. Level3 says 20% of the traffic over its What do they mean by QoS? Is it IntServ, DiffServ, PVCs, the law of averages or something else? I've had to deploy it on a campus network, and in doing so it seems like I've tread into territory where few if any big networks are to be found. Nortel apparently removed DiffServ capability for their ISP customers from one of their VoIP product offerings specifically because the customers didn't want it. My impression is that DiffServ is not used by those types of networks you mentioned, but I'd be interested to hear that I'm mistaken. backbone is better than Best-Effort. Ok, maybe they aren't the Internet. Internet2 gave up on premium QOS and deployed a less-than-Best-Effort scavenger class. Ok, maybe they aren't the Internet either. Scavenger is not currently enabled on Abilene. In fact, no QoS mechanisms are. On the other hand, those same QOS tools are very useful to the network engineer for managing all sorts of network problems such as DOS attacks and disaster recovery as well as more efficiently using all the available network paths. In my experience that is easier said than done. However, you remind me of what I think most who say they want QoS are really after: DoS protection. By focusing on DoS mitigation instead of trying to provide service differentiation, things begin to make more sense and actually become much more practical and deployable. John
Re: The Qos PipeDream [Was: RE: Two Tiered Internet]
On Fri, 16 Dec 2005 03:29:29 + (GMT) Christopher L. Morrow [EMAIL PROTECTED] wrote: In my experience that is easier said than done. However, you remind me of what I think is what most who say they want QoS are really after. DoS protection. By focusing on DoS mitigation instead of trying to provide service differentiation, things begin to make more sense and actually become much more practical and deployable. how does qos help with a dos attack? My point is that it's not QoS, it's DoS mitigation. Whatever that means to you, that is the solution I think most people may ultimately be looking for when they say they want QoS. John
NANOG 35 PGP keyring
Joe Abley is coordinating a set of PGP key signing parties throughout the NANOG 35 meeting. I know Joe has his hands full with program and steering committee responsibilities and could use help from others to ensure keysignings go smoothly. If you'll be attending any part of the meeting, have a PGP key and are interested in exchanging signatures with other attendees the least you should do is add your public key to the keyring located on Biglumber: https://www.biglumber.com/x/web?keyring=9445 If you already have a login to biglumber then you probably know what to do, otherwise, just paste a copy of your public key in the text box and click the submit button. Look for keysignings during the last 15 minutes of morning and lunch breaks as was done in Seattle. For additional information, the NANOG 35 PGP keysigning page is here: http://www.nanog.org/pgp.abley.html Joe's Seattle meeting presentation detailing the NANOG process: http://www.nanog.org/mtg-0505/abley.trust.html I'm going to try to be one of the folks that attends most if not all the keysignings, but having others do so and ensuring a hard copy of the keyring is available for each would be great. John
Re: Nuclear survivability (was: Cogent/Level 3 depeering)
On Thu, 6 Oct 2005 11:54:34 +0100 [EMAIL PROTECTED] wrote: While I realize that the nuke survivable thing is probably an old wives tale, it seems ridiculous that the Internet can't adjust by [...] It's not a myth. If the Internet were running RIP instead of BGP For the Internet, I believe it was indeed a myth. I wasn't there, but according to someone who was: http://www.postel.org/pipermail/end2end-interest/2004-April/003940.html John
Re: commonly blocked ISP ports
On Thu, 15 Sep 2005 10:29:27 +0300 Kim Onnel [EMAIL PROTECTED] wrote: 80 deny udp any any eq 1026 (3481591 matches) If you don't already know, it might be worth looking at a detailed breakdown of the source ports hitting that rule. It may be blocking a good amount of DNS and NTP traffic, for instance. If that is the case, what you may find an acceptable alternative is to preface it with rules like these, so that at least your recursive DNS servers will not have to maintain the recursive query in memory until it times out and your time servers don't miss a poll: permit udp any eq 53 host [recursive-dns-server-address] eq 1026 permit udp any eq 123 host [time-server-address] eq 1026 If a larger population of hosts are doing DNS then you'll have to decide whether or how to open it further or accept occasional failures. Note, in my experience, many of the Windows-based worms tend to use a source port > 1023, so while this opens an even bigger hole, you could allow through all src ports < 1024, which should create less breakage. Your filtering policy and security stance may not permit the trade-off of course, but it's another option I've seen used. John
Re: NANOG as the Internet government?
On Tue, 30 Aug 2005 14:14:52 -0400 (EDT) J. Oquendo [EMAIL PROTECTED] wrote: Ten Commandments of the Interweb I'm biased, but I think these are better and less contestable: 1. Thou shalt above all, maintain the integrity of the network. 2. Thou shalt have a long term strategic direction. 3. Thou shalt always opt for quality before expediency. 4. Thou shalt meet the requirements, exceed the expectations and anticipate the needs of users. 5. Thou shalt benefit from a successful implementation by careful project planning. 6. Thou shalt provide reliability, availability and serviceability. 7. Thou shalt maintain detailed, timely and accurate documentation. 8. Thou shalt commit to continuous training. 9. Thou shalt test in a test environment. 10. Thou shalt install and label cables properly. They're about 10 years old now and seem to still hold up pretty well. John
Re: VOIP provider
On Wed, 3 Aug 2005 02:08:30 -0700 (PDT) Bill Woodcock [EMAIL PROTECTED] wrote: What security risk does TFTP pose that isn't also shared by HTTP? I find it disappointing that the filtering police rarely stop to think about which protocols actually pose a security risk and why. Looked at in one way, TFTP could be more secure than many alternatives. A TFTP implementation (e.g. the code required) can be much simpler, which is typically an advantage from a security perspective. If file authenticity (or even encryption) is required, simple end system mechanisms can be applied before and after transmitting the file. For applications such as device bootstrapping that deploy some additional checks on the file transferred, TFTP is probably a perfectly reasonable option. If it weren't for the 2-byte block number limit, it might be even more widely used for this purpose. John
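As a small illustration of that simplicity, an entire TFTP read request (per RFC 1350) is just an opcode and two NUL-terminated strings, and the 2-byte block number is what caps the transfer size. The filename here is a placeholder:

```python
import struct

def rrq(filename, mode="octet"):
    """Build a TFTP read request (RFC 1350): opcode 1, then the
    filename and transfer mode as NUL-terminated strings."""
    return (struct.pack(">H", 1)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

pkt = rrq("router-confg")  # placeholder filename
assert pkt[:2] == b"\x00\x01"

# The 2-byte block number: with 512-byte data blocks, 65535 blocks
# caps a transfer at under 32 MB unless the implementation rolls the
# counter over (many don't).
print(65535 * 512)  # 33553920 bytes
```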
Re: Request for Peering with AS4788 at Equinix SJO/ASH/LA
On Thu, 7 Jul 2005 12:10:46 -0500 Jason Sloderbeck [EMAIL PROTECTED] wrote: we're not a provider of transit. I have no desire to find new peers, so I'm not considering the offer below -- just wondering if this is a red flag that's worth passing on. Probably not. When I was at DePaul and when we connected to AADS I sent a lot of those types of emails to whoever the contacts were that were also connected there. Those that had open peering policies would often email back that they had already configured something and were ready when I was. Some, probably chuckling, never responded, and some kindly sent back the equivalent of the "Here is a link to our peering policy, if you can meet these requirements then sure (but we both know you can't :-)" standard reply. Some organizations find it beneficial to peer with as many people as possible at the exchanges. The NANOG Peering BoF uses a partially derisive, but mostly humorous, term for these folks. John
Re: Fundamental changes to Internet architecture
On Fri, 1 Jul 2005 12:53:53 GMT Fergie (Paul Ferguson) [EMAIL PROTECTED] wrote: With all respect to Dave, and not to sound too skeptical, but we're pretty far along in our current architecture to fundamentally change, don't you think (emphasis on fundamentally)? From the article it seems clear that the focus is on 'new', not 'changed'. No need (and probably little likelihood now) to change this architecture if you don't want to, but a new architecture may come along that makes this one seem quite outmoded. I'm skeptical about something truly new coming from this specific project, but I hope it comes from somewhere. With any luck someday we'll be referred to as those 'old interphants'. :-) John
Re: Blocking port udp/tcp 1433/1434
On Thu, 12 May 2005 04:15:07 -1000 Brian Russo [EMAIL PROTECTED] wrote: Perhaps a better question is: Is there now justification for allowing transit for ms-sql slammer ports? I think there always has been some justification. Here is a very small sample of real traffic that I can assure you is not Slammer traffic, but it is being filtered nonetheless (IP addresses removed): May 12 09:15:30.598 CDT[...] denied udp removed(53) -> removed(1434), 1 packet May 12 09:26:30.210 CDT[...] denied tcp removed(80) -> removed(1434), 1 packet May 12 09:32:23.122 CDT[...] denied tcp removed(80) -> removed(1434), 1 packet May 12 09:42:38.558 CDT[...] denied udp removed(123) -> removed(123), 1 packet May 12 10:12:50.422 CDT[...] denied udp removed(53) -> removed(1434), 1 packet Some have suggested adjusting filters so that they only match when the src port is > 1023, which may be somewhat less harmful, but then others may object to this being an unacceptable hole. You can design networks, educate people, build tools, and write secure software to deal with all of the security problems, which will be very expensive and slow, or you can count down from 2^320 til you approach 0, perhaps in large jumps, which is the way of the IP/TCP packet filters. That might be just as slow and expensive, but unfortunately results in complete dismantling of the system. Perhaps there are better alternatives, but I think they probably fall in between those two. John
Re: BCP for ISP to block worms at PEs and NAS
On Sun, 17 Apr 2005 13:28:21 +0200 Kim Onnel [EMAIL PROTECTED] wrote: I have the ACL below applied on many network devices to block the common worms ports, Beware, you are guaranteed to be blocking other, legitimate things too with some of these rules. More below. ip access-list extended worms deny tcp any any eq 5554 Whatever worm you're trying to mitigate above (sasser?), you will also occasionally be taking out TCP sessions that happen to be using that port. Most commonly where one side uses 5554 as its ephemeral port. deny tcp any any range 135 139 deny udp any any range 135 netbios-ss deny tcp any any eq 445 deny udp any any eq 1026 As before, you are going to be removing some legitimate traffic. With UDP ephemeral ports this will most often be DNS and NTP traffic. Note, many people do what you do all the time to the detriment of both real security and robustness in my opinion, but it's your net and you can throw away random packets if you want to. Perhaps set the rules to permit and log first, let it run for a while and then see what you'll be missing. John
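The permit-and-log-first approach suggested above could be sketched like this in IOS syntax (the ACL name is hypothetical; the ports mirror the original rules). After an audit period, the log output shows what legitimate traffic the eventual deny rules would have discarded:

```
! Sketch: audit pass. Permit and log the candidate worm ports for
! a while; review the logs before converting these to deny rules.
ip access-list extended worms-audit
 permit tcp any any eq 5554 log
 permit tcp any any range 135 139 log
 permit udp any any range 135 netbios-ss log
 permit tcp any any eq 445 log
 permit udp any any eq 1026 log
 permit ip any any
```

Note that heavy ACL logging can itself load the router, so this is best done for a bounded window.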
Re: BCP for ISP to block worms at PEs and NAS
On Sun, 17 Apr 2005 13:00:30 -0700 J.D. Falk [EMAIL PROTECTED] wrote: deny udp any any eq 1026 As before, you are going to be removing some legitimate traffic. Is this really true? All of the ports listed above are used by LAN protocols that were never intended to communicate directly across backbone networks -- that's why VPNs were invented. I was speaking to the last UDP rule as shown above, but a port number is becoming increasingly ambiguous as applications adapt when specific ports are filtered. There is also the idea of a 'port switching' process. Find an archived copy of draft-shepard-tcp-reassign-port-number for an example. Or even consider how TFTP works (port 69 is only in use for the initial packet to the TFTP server). Such a process actually has two 'good' properties that are often at odds in many deployments. One is to foster transparency back into the network and the other is to improve resiliency from attackers attempting to insert spoofed packets into the communications. John
Re: Yes, I realize it's April Fools Day, but... (was: Cisco to merge with Nabisco)
On Fri, 1 Apr 2005 15:02:06 -0500 Joe Provo [EMAIL PROTECTED] wrote: I have as much humour as the next guy, but [insert renewed call for nanog-chat or nanog-social or whatever would keep the chitchat in a different blasted bucket]. Heck, if this is the general bucket then add nanog-linkexchange :-) Sometimes I think people post links so they can incite flame wars without actually looking like a flame war participant. John
Re: MD5 for TCP/BGP Sessions
On Wed, 30 Mar 2005 16:50:38 +0100 Doug Legge [EMAIL PROTECTED] wrote: What has been the general effect in the ISP/Enterprise community following the warnings? - Have people applied MD5? Without question more BGP sessions suddenly became 'MD5-enabled' across the net. It has been debated whether this was necessary or even a good thing. You can find some references, including some on this list, where BGP peer sessions were being reconfigured with MD5 applied during the last TCP sequence number scare. - If not what other technologies were implemented (IPSec AH transport mode for BGP sessions/ACL/rate limiting etc)? I don't know of any widespread use of IPsec for BGP sessions even after that last round of alerts, but I am sure some exists. I would be interested in hearing from those that have implemented it in production. ACLs are often used, but vary widely depending on the organization. It can be difficult to manage ACLs on a box with a large number of peers that uses many local BGP peering addresses. I'm sure some organizations reviewed and updated their ACLs as a result of the last scare, but that is a local, private decision and it would probably be hard to get a good sample of who and what changed. - Has there been any performance impacts seen since implementation? No real world cases that I've heard of, but I believe a number of sites prefer not to implement MD5 in part because of the potential performance/DoS issues with it enabled. - Has the support of the BGP environment been increased because of this implementation (What policies regards changing the MD5 keys were implemented)? Not in my case. We use a simple algorithm to come up with the shared secret, then document it in our peering contact database, which is in a secure, internal location that we can reference if we ever need it. In our case it is just a relatively simple additional step when configuring or reconfiguring a BGP session. Although I have seen some compatibility issues between platforms. 
For example, relatively long passphrases were not properly supported. In my experience, I haven't seen much practice of changing MD5 keys on BGP sessions except when an organization makes major changes or hasn't kept a record of the shared secret during changes. That is probably the most common time it will get changed. I suppose some organizations may change it when employees who knew it leave the organization, but I've not seen much evidence of that. - Was this seen as a valid fix or a knee-jerk reaction (Having re-read the exchanges on NANOG regards the actual mathematical probability of generating this attack, what did the ISP community actually do (compared to what the academic/vendor community were suggesting)? I think that has probably been discussed enough already and will probably be again now, so I'll leave it to others to re-hash that. Do note that at least two specific and related solutions have appeared in the last few years. One is the Generalized TTL Security Mechanism (GTSM) as defined in RFC 3682. It was originally written with BGP in mind, but is also useful for things like MSDP peering. See the RFC for details and why this might be used on BGP sessions. Another is smooth transition between shared secret changes or when applying authentication where none existed. I don't have references handy, but I seem to recall this was still vendor-specific and not fully implemented. Perhaps others will step in with updated info. MD5 can mitigate a risk, but it can come with some operational costs. Some operators prefer one side of the risk equation over the other. Some place a higher weight on one side of the equation than the other depending on the organization and the network. In my experience most will do MD5 if asked and only a small number would actually refuse. Whilst I've had some response from bgp-info and bgp-security, it's not really been sufficient to draw any real conclusions. 
From your knowledge and experience are you aware, either internally or with customers, of the take-up of MD5 implementations, and had anyone actually suffered an attack prior to implementation? Not that I'm aware of, but I've almost always used it and other knobs when I could, so maybe I just didn't notice? John
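The GTSM mechanism (RFC 3682) mentioned above is enabled per neighbor on platforms that support it. A hedged IOS-style sketch, with hypothetical AS numbers and a documentation-range neighbor address:

```
! Sketch: RFC 3682 GTSM on an eBGP session. The router accepts
! packets for this session only if they arrive with a TTL high
! enough to have come from a directly connected neighbor; remotely
! originated spoofed packets fail the TTL check before any MD5 or
! TCP processing happens.
router bgp 65001
 neighbor 192.0.2.1 remote-as 65002
 neighbor 192.0.2.1 ttl-security hops 1
```

Both sides of the session need to agree to use it, which is one reason deployment has been gradual.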
Re: IRC bots...
On Sat, 12 Mar 2005 17:09:17 -0800 (PST) Bill Nash [EMAIL PROTECTED] wrote: As popular as instant messenger, and increasingly, voip toys, have become, actual IRC usage represents a diminishing percentage of inter-user chatter. Even something as simple as carving irc usage out of your netflow records and tagging specific endpoints as potential sources is a piece of automation that will save you some time down the road. A decent network inventory would facilitate this. While most IRC traffic, even much of the so-called 'bad' IRC traffic, uses TCP 6667, IRC traffic that doesn't is not easily discerned through traffic flows, except perhaps with a pre-defined list of addresses and ports to seed monitoring with. Tallying just the TCP 6667 traffic, perhaps eliminating very short lived or small flows, should be a good indicator of IRC traffic usage, but tagging those as potential sources for problems may be difficult. Perhaps in environments where IRC as an application is strictly forbidden or blocked this will work well, but on more open and larger networks this may waste time, not save it, since in the latter case figuring out what is legit and what is not will likely be a lot of leg work. You can automate some of this further by building white lists or black lists of IRC server addresses. A white list doesn't tend to scale very well. A black list scales better, but you have to get those black listed addresses, and doing that part is harder. There are some people/groups who spend time finding hosts to black list, so leveraging their data can be very useful and time saving. John
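The tallying idea above, carving TCP 6667 out of flow records while eliminating very short-lived flows, can be sketched in a few lines. This is a minimal illustration with a made-up, simplified flow-record format and arbitrary thresholds; real netflow export parsing is considerably more involved:

```python
# Sketch: tally likely IRC usage from simplified flow records.
# Each record is (src, dst, dst_port, packets, bytes). Flows to
# TCP 6667 smaller than the threshold are skipped as probable
# scans or ephemeral-port collisions. The record format and the
# threshold value are assumptions for illustration.
from collections import defaultdict

IRC_PORT = 6667
MIN_PACKETS = 10   # skip very short-lived flows

def tally_irc(flows):
    totals = defaultdict(int)   # bytes per source address
    for src, dst, dst_port, packets, nbytes in flows:
        if dst_port == IRC_PORT and packets >= MIN_PACKETS:
            totals[src] += nbytes
    return dict(totals)

flows = [
    ("10.0.0.5", "198.51.100.9", 6667, 1200, 150000),  # sustained IRC
    ("10.0.0.7", "198.51.100.9", 6667, 2, 120),        # tiny, skipped
    ("10.0.0.5", "203.0.113.4", 80, 50, 40000),        # web, ignored
]
print(tally_irc(flows))
```

As noted above, a non-zero tally identifies likely IRC users, not necessarily problem hosts; separating the two is the leg work.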
Re: Vonage complains about VoIP-blocking
On Tue, 15 Feb 2005 16:18:01 -0500 Daniel Golding [EMAIL PROTECTED] wrote: Why block TFTP at your borders? To keep people from loading new versions of IOS on your routers? ;) Fear. Not trying to be flippant, but what's the basis for this? In addition to what others have said, the T in TFTP and the use of UDP is a clue as to why you'd want to use TFTP. It's relatively light weight and relatively simple to implement in a small platform with limited resources. It is not required to run TCP after all. It would be possible to build a relatively trustworthy TFTP process without having to expose the device to the TCP-based processes that typically get used for SSH or HTTPS. Since the TCP-based methods tend to contain more code and are thus more complex, vulnerabilities may be more likely. I'll also point out that implementations use port 69 for only a single packet, the one from the client initiating the write or read. That means if you really must filter, you might be able to get away with filtering the destination port in the particular direction that is most dangerous for you. John
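The simplicity claimed above is easy to see in the wire format. Per RFC 1350, the only packet that ever touches UDP port 69 is the client's initial read or write request; the server answers from a fresh ephemeral port. A sketch of constructing that request (filename is a made-up example):

```python
# Sketch: build a TFTP read request (RRQ) per RFC 1350. This is
# the only packet sent to UDP port 69; the server replies from an
# ephemeral port and the rest of the transfer never touches 69.
import struct

OP_RRQ = 1  # opcode 2 (WRQ) would be the write request

def build_rrq(filename, mode="octet"):
    # 2-byte opcode, then NUL-terminated filename and mode strings.
    return (struct.pack("!H", OP_RRQ)
            + filename.encode() + b"\x00"
            + mode.encode() + b"\x00")

pkt = build_rrq("ios.bin")
print(pkt)
```

That the entire request fits in one trivially parsed datagram is why TFTP is implementable on platforms too small for a TCP stack.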
Re: Smallest Transit MTU
On Thu, 30 Dec 2004 01:00:22 -0500 Robert E.Seastrom [EMAIL PROTECTED] wrote: A naive reader might think from Dan's posting that the Internet didn't work at all before ECN was codified (experimental with RFC 2481 in January 1999 and standards-track with RFC 3168 in September 2001). [...] ECN has always looked to me like a solution in search of a problem, which may be why so few people have their panties in a bunch over non-support of it. It's not just that ECN isn't supported that is the problem, it's when systems by default reject packets with reserved bits set. While you may pan ECN, it, and any future enhancement to Internet protocols like it, should typically be silently ignored by end hosts that don't understand them so those experiments can at least take place. ftp://ftp.rfc-editor.org/in-notes/bcp/bcp60.txt I also suspect that only a very small portion of the users who would benefit from ECN are hanging out in places like NANOG, so your view of ECN desirability may be limited. John
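The reserved bits at issue here are the two TCP header flags that RFC 3168 redefined as CWR and ECE. The "silently ignore" behavior argued for above amounts to masking them off rather than rejecting the segment, which a sketch makes concrete:

```python
# Sketch: the two formerly reserved TCP flag bits that ECN
# (RFC 3168) redefined. A host or middlebox that does not
# understand them can simply mask them off instead of dropping
# the segment outright.
ECN_CWR = 0x80  # Congestion Window Reduced
ECN_ECE = 0x40  # ECN-Echo
SYN = 0x02

def has_ecn_bits(tcp_flags):
    return bool(tcp_flags & (ECN_CWR | ECN_ECE))

def strip_ecn_bits(tcp_flags):
    # "Silently ignore": clear the bits, keep the segment.
    return tcp_flags & ~(ECN_CWR | ECN_ECE)

syn_ecn = SYN | ECN_CWR | ECN_ECE  # an ECN-setup SYN
print(has_ecn_bits(syn_ecn), hex(strip_ecn_bits(syn_ecn)))
```

The firewalls complained about in this thread did the opposite: they treated any segment with these bits set as malformed and discarded it, breaking connections to ECN-capable hosts.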
Re: Smallest Transit MTU
On Thu, 30 Dec 2004 17:42:44 -0800 David Schwartz [EMAIL PROTECTED] wrote: I, for one, do not agree. End hosts and firewalls *should* reject all traffic they don't understand. It's precisely to prevent our unintentional participation (as end hosts) in such 'experiments' that we deploy such filters. The problem is when the policies are not maintained (or are [...] If everyone actually did that, it would make upgrades to lots of things very interesting. We'd have to rely on the initial design and implementation being close to or at perfection for now and long into the future. If you do not upgrade or configure your systems to understand the new use of previously reserved bits, then in the typical case you would silently ignore those bits and things would just continue to work in the way you were used to. Most people designing ways to make use of reserved bits in Internet protocols these days I think understand backwards compatibility is often a requirement. I think you may be fearful that the use of reserved bits introduces a new security risk, because of something a system may do in response to the use of those new fields. That is a very legitimate concern and a very real potential risk. I guess in my view of the world, in practical terms, we're not likely to see an experimental protocol start getting widely deployed and then suddenly discover that we have a major security threat on our hands that we cannot easily fix before it brings the net to a complete halt. At least not since the publication of RFC 793. :-) I think the concept of reserved fields is a relatively well-accepted practice in computing by now. Security is important, but we cannot allow security concerns to completely halt progress. It just may be in the interest of security to allow this kind of experimentation to occur. IMO, it's negligent to configure a firewall to pass traffic whose meaning is not known. That would mean no end-host-to-end-host encryption that the network firewall cannot understand. 
...and for anyone else who likes to block unknown bits: don't let me see or hear you complain about how the net sucks, because you are not letting it evolve so that it can be fixed. :-) John
Re: Smallest Transit MTU
On Fri, 31 Dec 2004 01:51:01 -0500 Robert E.Seastrom [EMAIL PROTECTED] wrote: You must not remember how SunOS 4 responded when handed icmp echo requests with the record-route option set (passed the packet on for the next guy to enjoy and then promptly paniced). [...] Now I know wide deployment of IPv6 is in jeopardy. If using 2 reserved bits in a TCP header causes this kind of fear, imagine the resistance IPv6, with its redefinition of 20 header bytes plus the addition of 20 more, has yet to see. John
Re: Anycast 101
On Mon, 20 Dec 2004 17:18:30 + Paul Vixie [EMAIL PROTECTED] wrote: there are some million-bot drone armies out there. with enough attackers I've heard that claim before, but I've yet to be convinced that those making it were doing more than speculating. It is not unreasonable to believe there are millions of bot drones, but that is not the same as an army under a single or even coordinated control structure. It is entirely possible to build armies of that size, but maintaining them over any length of time is probably quite difficult. I'd of course be interested to hear about any evidence to the contrary on or off list. John
Re: is reverse dns required? (policy question)
On Wed, 01 Dec 2004 08:56:23 -0800 Greg Albrecht [EMAIL PROTECTED] wrote: are we obligated, as a user of ARIN ip space, or per some BCP, to provide ad-hoc reverse dns to our customers with-out cost, or without financial obligation. I thought I saw some 'MUST' statements in an RFC about this in the past, but I can't find anything at the moment. My guess is that you are not obligated to really do anything for free. Market pressure may be the deterrent to charging for each basic service request. You might instead offer the option of delegating the reverse zones to your customers to self-manage without incurring any additional cost. This may not work well in your specific environment however. John
Re: BBC does IPv6 ;) (Was: large multi-site enterprises and PI
On Sat, 27 Nov 2004 18:25:52 +0100 Iljitsch van Beijnum [EMAIL PROTECTED] wrote: All I hear is how this company or that enterprise should qualify for PI space. What I don't hear is what's going to happen when the routing tables grow too large, or how to prevent this. I think just about anyone should qualify, but ONLY if there is some form of aggregation possible. PI in IPv6 without aggregation would be a bigger mistake than all other IPv6 mistakes so far. The entire Internet routing table doesn't have to be centralized in the core and it doesn't even have to be done by what are now called routers. While most will instantly pronounce it as unworkable without even trying, source routing or routing at another layer is an alternative way of dealing with this problem. Excuse me while I quote out of order, you say: While IPv6 is still IP, it's not just IPv4 with bigger addresses. We have 128 bits, so we should make good use of them. One way to do this Should IPv6 routing just be IPv4 routing with bigger addresses? John
Low latency forwarding failure detection
Not receiving any response for over a week after posting this query to cisco-nsp, I thought perhaps folks here might have some input. In my scenario, Cisco is the likely gear involved, but even if people have vendor-neutral feedback about this I'd be interested in hearing it. From: John Kristoff [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: Low latency forwarding failure detection Date: Tue, 26 Oct 2004 17:14:57 -0500 X-Mailer: Sylpheed-Claws 0.9.12 (GTK+ 1.2.10; i686-pc-linux-gnu) I've got a situation where something like HSRP seems appropriate for a redundant default gateway configuration. However, this application will want very low latency in finding and using the alternative gateway. Note, while the hosts have two NICs, they are both on the same subnet with one interface the default source and sink as long as it has link. I don't get to change this behavior. Default HSRP failure detection time however is likely not quick enough to bring a standby interface up to get traffic moving again. I see that HSRP provides for hello and hold times in milliseconds. I have a few questions for people who may have had a need to get very low latency recovery of links and routers. Have you used HSRP to do this? On a typical local ethernet (gig) LAN configuration, what sorts of latencies and packet loss have you seen during a failure event? I'm cco-familiar with GLBP. It appears to have essentially the same timing knobs with the ability to actively load balance traffic. Is my assumption that some traffic will not experience any packet loss if it is not using the failed path correct? For anyone who has used this, was the added complexity of this protocol worth it? As a general question... are people looking at implementing BFD? http://www.ietf.org/html.charters/bfd-charter.html Here I'm draft-familiar with what this is and I believe some vendors have code for it, but I've yet to try it. I believe the spec is held up for security and IESG review. 
This work looks very useful for some related applications going forward. For this crowd, is this deployable and useful for minimizing forwarding failure time? This doesn't appear to be on the roadmap for HSRP/GLBP from what I can tell, but perhaps that would be a worthwhile application of BFD? Are there other things people are doing (besides plain old load sharing) to get very low latency failover? John
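The millisecond HSRP timers mentioned above look something like this in IOS syntax. A hedged sketch only: the interface, addresses, and timer values are hypothetical, and the hold time should be tuned against the platform's ability to actually process hellos at that rate:

```
! Sketch: sub-second HSRP failure detection. Hellos every 200 ms,
! standby takes over after 750 ms without hearing one.
interface GigabitEthernet0/1
 ip address 192.0.2.2 255.255.255.0
 standby 1 ip 192.0.2.1
 standby 1 timers msec 200 msec 750
 standby 1 preempt
```

Aggressive timers trade CPU load and false-failover risk for detection speed, which is part of why protocols like BFD, designed for fast detection, are attractive.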
Re: short Botnet list and Cashing in on DoS
On Wed, 20 Oct 2004 15:14:29 -0400 Hannigan, Martin [EMAIL PROTECTED] wrote: [..]we additionally request that they resolve the RR to 127.0.0.3 before they lock out and reload the zone. We picked 127/8 as the standard. RFC 1918 wasn't suitable for obvious reasons. [ I know you know this Martin, but for some list subscribers... ] As I briefly mentioned in the presentation, 127/8 addresses can be problematic also. 127.0.0.3 is better than 127.0.0.1, but there are some cases where a self-inflicted reflection attack can occur. Take for example what happened to some organizations when Blaster struck. Some organizations changed their local recursive DNS servers so that they were authoritative for the windowsupdate record and then pointed it to 127.0.0.1. When the Blaster clock struck midnight (so to speak), the attack against that name did not reach Microsoft, but it did tend to result in odd-looking TCP RST packets on the local network. When the worm attacked the resolved name, it attacked 127.0.0.1, which would normally sound fine, but the worm did so using spoofed source addresses. Most infected hosts, not having a process listening on the attacked port (TCP port 80), would issue a TCP reset to the spoofed source. Arguably a dumb thing for a system to do, but regardless it would happily send responses onto the wire to the spoofed address using a source address of 127.0.0.1. A number of admins then began wondering why they were seeing all these RSTs from a loopback address on their net. Hopefully anti-spoofing knobs were enabled and they didn't get far, but even on some local segments that might have caused a noticeable load problem. Interestingly, some systems will respond to certain types of packets to any local address. So even using something besides 127.0.0.1 may result in this odd reflection behavior. My guess is that it isn't that big a deal since it should be localized as long as anti-spoofing knobs are enabled. 
I kind of like the idea of using 240/4 for closing names, especially since many network operators I suspect are more likely to notice (and hopefully do the host mitigation) when hosts send to bogons rather than when hosts are doing DNS queries that result in bogon answers (even if hosts are querying excessively for them). I may be wrong, but I tend to think 240/4 is less likely to be in use than any of the other reserved or special use space. In another somewhat related session about DNS issues, when it was suggested that a well known address be used to close with, someone at the mic suggested that using well known addresses for this purpose may not be good practice. I think they were referring in part to Dave Plonka's draft about embedding globally routable addresses in hardware, which I don't think applies in this case, but maybe I missed something. Their argument may be worthwhile to consider for other reasons. John
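The local-override technique described above, answering a sinkhole address for a name being locked out, could be sketched as a BIND-style master zone. Everything here is hypothetical: the zone would be declared in named.conf as something like `zone "bad.example.com" { type master; file "sinkhole.zone"; };`, and the serial and timer values are placeholders:

```
; Sketch: a sinkhole zone answering 127.0.0.3 (or another chosen
; closing address) for any name under the locked-out domain.
$TTL 300
@   IN SOA localhost. hostmaster.localhost. (
        2004102001 ; serial
        3600       ; refresh
        900        ; retry
        604800     ; expire
        300 )      ; negative caching TTL
    IN NS  localhost.
@   IN A   127.0.0.3
*   IN A   127.0.0.3
```

As the Blaster example shows, the choice of closing address matters; a 240/4 address here would avoid the loopback reflection behavior at the cost of relying on hosts and routers treating it as unroutable.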
Re: NANOG 32 PGP key signing
On Tue, 5 Oct 2004 13:58:55 +0100 Jonathan McDowell [EMAIL PROTECTED] wrote: http://www.nanog.org/pgp.html There doesn't seem to be a lot of emphasis on identity verification according to this page. It only says You might want to bring photo id [...] http://sion.quickie.net/keysigning.txt Thanks for the feedback. Other than a few minor host and link details I don't think that page has changed much over the last few meetings. I'll suggest some wording changes to the NANOG support folks offlist. In the meantime I've included the above link and the GnuPG Keysigning Party HOWTO document as notes to the event listing: https://www.biglumber.com/x/web?ev=16478 In the past we've had each key fingerprint read off so participants can verify they have the correct signature. Len's method is certainly faster, which most people would probably appreciate. However, we often have a lot of last minute participants, which means ensuring everyone is on the list and everyone has their own hard copy of the list with the correct md5/sha1 hash may be a problem. If no one has any major complaints, we can use Len's method for those who get their keys into the keyring on biglumber a few working days before the meeting. That will give early participants time to bring their own hard copy of the list and make verification for those keys go that much faster for those who are planning ahead. Keys submitted after that period and up to 6:00 p.m. on Monday can be verified as we've always done, with each fingerprint being read and verified in front of the group or between individuals. This means two lists. One for Len's method and another for late participants (it'll include all keys for completeness). John
NANOG 32 PGP key signing
Those of you attending NANOG 32 are encouraged to submit your public PGP key to take part in the regular key signing event. Even if you are not able to attend the group PGP key signing event but will be at NANOG 32, you are encouraged to submit your key anyway. You can always meet up with other PGP key users and sign keys on your own time. Adding your public key to NANOG 32's key ring is easy. Just upload your public key here: https://www.biglumber.com/x/web?keyring=7840 or send me an email with your PGP public key inline with the subject 'NANOG PGP key' if you'd rather and I'll do it for you. As usual, the group PGP signing event will occur following the nsp-sec BoF on Monday night. Full details for the NANOG 32 PGP signing event can be found here: http://www.nanog.org/pgp.html Feel free to drop me an email offlist if you need any assistance, particularly if you are setting up PGP for the first time. John
Re: Log Analizing tool for Cisco and Juniper router (switch)
On Tue, 21 Sep 2004 22:49:36 +0800 (CST) Joe Shen [EMAIL PROTECTED] wrote: We want to analize log from Cisco and Juniper Router and switch periodically. cislog on the following page is Cisco specific, but you may find it useful: http://aharp.ittns.northwestern.edu/software/ It is basically a bunch of Perl regex's and some Top X reports, plus a summary of hourly log count. I haven't gotten around to packaging up the Juniper equivalent yet. John
Re: Peering point speed publicly available?
On Thu, 1 Jul 2004 19:09:52 -0500 Erik Amundson [EMAIL PROTECTED] wrote: I have a question regarding information on my ISP's peering relationships. Are the speeds of some or all peering relationships public knowledge, and if so, where can I find this? By speed, I mean bandwidth (DS3, OC3, 100Mbps, In addition to some of the other answers, you can sometimes discover peering relationships and even infer some routing policy at public exchanges if you 1) have access to a host on each of the provider's networks (near the exchange preferably) and 2) you can rely on the public address scheme provided by the exchange operator to be used for peering. So for example, if you have a host on provider X's network, run a series of traceroutes to addresses in the exchange's IP space. If the traceroute reaches the far side, you can infer that peering is established. If instead the traceroute results in a TTL failure or unreachable message, you can infer that peering is not established. Finding hosts behind each network is often as easy as finding publicly accessible traceroute pages such as those found on traceroute.org. Note, this is far from foolproof. For a number of reasons there will be false positives and false negatives if you try to rely on this as the only source of info for peering discovery. John
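The inference rule above can be sketched as a small classifier over traceroute hop lists. This is an illustration only: the exchange prefix is a documentation-range stand-in, the hop lists are made up, and as noted above real results carry false positives and negatives:

```python
# Sketch: infer peering at an exchange from traceroute behavior.
# A trace that crosses the exchange's public peering prefix and
# keeps going suggests a session is up; one that dies at or before
# the exchange suggests otherwise. Prefix and hops are examples.
import ipaddress

EXCHANGE_PREFIX = ipaddress.ip_network("192.0.2.0/24")  # stand-in

def peering_established(hops):
    """hops: router IPs seen in order, '?' for a silent hop."""
    seen_exchange = False
    for hop in hops:
        if hop == "?":
            continue
        if ipaddress.ip_address(hop) in EXCHANGE_PREFIX:
            seen_exchange = True
        elif seen_exchange:
            return True   # made it past the exchange fabric
    return False

print(peering_established(["10.1.1.1", "192.0.2.40", "198.51.100.1"]))
print(peering_established(["10.1.1.1", "192.0.2.40", "?"]))
```

In practice you would run this against traces from a host inside each provider, one trace per candidate peer address on the exchange fabric.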
Re: ntp config tech note
On Thu, 20 May 2004 21:08:43 -0700 Michael Sinatra [EMAIL PROTECTED] wrote: I run two stratum-1 servers and a few stratum-2s and I provide time via multicast (224.0.0.1), but I don't use it for my servers, except for Presumably you meant 224.0.1.1. testing and verification. I am also providing anycast ntp, and, if the belt and suspenders weren't enough, I am experimenting with manycast. Noting that NTP uses more than a simple request/reply message exchange, no concerns about session breakage? SNTP would certainly be a very viable candidate for anycast. Except in the extreme case such as wisc.edu's unfortunate experience, does multicast buy much? Traffic loads for properly running clients and distributed servers tend to be relatively low in my experience. John
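SNTP's fitness for anycast, suggested above, comes from its statelessness: the whole client request is one fixed-size datagram, so any anycast instance can answer it. A sketch of the minimal request per the SNTP spec (RFC 2030 at the time):

```python
# Sketch: a minimal SNTP client request is a single 48-byte
# datagram. The first byte packs LI=0, VN (version), and Mode=3
# (client); the remaining 47 bytes may be zero in the simplest
# request. Statelessness is what makes this anycast-friendly.
import struct

def build_sntp_request(version=4):
    first = (0 << 6) | (version << 3) | 3   # LI | VN | client mode
    return struct.pack("!B", first) + b"\x00" * 47

req = build_sntp_request()
print(len(req), hex(req[0]))
```

Full NTP, by contrast, keeps per-peer state across many exchanges to discipline the clock, which is where the anycast session-breakage concern comes from.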
Re: ntp config tech note
On Thu, 20 May 2004 17:33:22 -0400 Jared Mauch [EMAIL PROTECTED] wrote: I'm also wondering, how many people are using the ntp.mcast.net messages to sync their clocks? what about providing ntp We have had one user that I know of who was receiving time sync info via multicast announcements, but personally I don't care for doing NTP this way. In my experience systems/users don't bother to do any sort of authentication or filtering on NTP sources. Many server admins do not enable authentication, and some implementations do not support it at all. I'm pretty sure I don't want to get time from just anyone who sends to 224.0.1.1, especially on networks connected to the multicast-enabled Internet. That group address I might note is one I tend to scope at admin boundaries for just that reason. John
Re: MD5 proliferation statistics
On Thu, 6 May 2004 17:52:16 -0400 Patrick W.Gilmore [EMAIL PROTECTED] wrote: Unfortunately, my organization was not passive until we got to see what the threat actually was, so our numbers are not useful. Would any traffic-carrying-organization care to discuss their numbers? http://www.cctec.com/maillists/nanog/historical/0109/msg01381.html After that post, DePaul's peering sessions peaked at about 50. If I'm not mistaken, only 1 new peer would not do MD5. The number doing MD5 for the first time probably went up slightly as well. In the end, one of those organizations who wouldn't do MD5 is no longer in operation and another, well, I'm here now and that was something on my list of to-do's. :-) John
Re: TCP/BGP vulnerability - easier than you think
On Wed, 21 Apr 2004 21:00:55 +0100 (IST) Paul Jakma [EMAIL PROTECTED] wrote: risk of crypto DoS than compared to the simple BGP TCP MD5 hack. The risk is due to MD5, not IPSec :). I would say the risk is due to implementation. If the vendor's gear vomits quicker due to a resource consumption issue in handling MD5, is this really a problem with MD5? These issues can usually be fixed by simply improving the scaling properties of the implementation so that it can cope with adverse conditions. John
Re: Microsoft XP SP2 (was Re: Lazy network operators - NOT)
On 19 Apr 2004 22:16:58 + Paul Vixie [EMAIL PROTECTED] wrote: [(*) wierd could mean streams of tcp/syn or tcp/rst, or forged source addresses, or streams of unanswered udp, or streams of outbound tcp/25, or udp/137..139, or who knows what it'll be by this time next month?] Precisely. It could be most anything and likely will be eventually. Why not stop the hacks that are filtering, whitelists and rate limiting, and just replace end hosts with dumb terminals, the links with fixed rate channels, and in the network place all the controls and content? Instead of network service providers we would mostly be a collection of systems operators. inside the headend, or whatever), it's going to get done by the dreaded giant merciless monster known as market forces. This and the installed base are probably why the above won't occur over night, but things are veering in that direction. While end users will resist many attempts to remove their freedom of bits, freedom of cpu and freedom of connectivity, what is being designed, or better, re-designed is a network with a very fragile infrastructure. This is good for no one. The ideas about tussle (D. Clark, et al) are a way to think about the problems and solutions, but still the difficulty, because of market forces and installed base, is how to get there from here. John
Re: who offers cheap (personal) 1U colo?
On Mon, 15 Mar 2004 23:17:27 -0500 (EST) Andrew Dorsett [EMAIL PROTECTED] wrote: I'm not referring to the time required to implement. I'm talking about the time it takes for the user. On the user end. Lets do some simple math. Lets say I turn on my laptop before I shower, I power it down during the day while I'm in class and I turn it back on when I get home in the evening. This means two logins per day. Lets say that the login The systems I'm familiar with require only a single login per quarter, semester or school year unless there is a manual de-registration, which is most often due to a AUP violation or system compromise. John
Re: who offers cheap (personal) 1U colo?
On Sun, 14 Mar 2004 01:29:29 -0500 (EST) Andrew Dorsett [EMAIL PROTECTED] wrote: This is a topic I get very soap-boxish about. I have too many problems with providers who don't understand the college student market. I can There are certain environments where it would be nice for people to have spent some time. Working at a university would be one good experience for many people, particularly in this field, to have had. think of one university who requires students to login through a web portal before giving them a routable address. This is such a waste of time for both parties. Sure it makes tracking down the abusers much easier, but is it worth the time and effort to manage? This is a very In most implementations I'm familiar with, the time and effort is mostly spent in the initial deployment of such a system. legitimate idea for public portals in common areas, but not in dorm rooms. In a dorm room situation or an apartment situation, you again know the physical port the DHCP request came in on. You then know which room that port is connected to and you therefore have a general idea of who the abuser is. So whats the big deal if you turn off the ports to the room until the users complain and the problem is resolved? As someone else mentioned, an AUP may be a reason for such a system. In addition, these systems often allow an ID to be notified, restricted or disabled, and not just from a single port, but from any port where this system is used. Also know that some schools' dorm resident information is neither populated in nor easily accessible from network connectivity records. The portal systems are often used as a way to be proactive in testing a dorm user's system for vulnerabilities and allowing minimal connectivity for getting fixed up if they are. This is often referred to as the quarantine network. Many institutions have tried to simply turn off a port and deal with the problem when a user calls. 
Sometimes the user moves, but even if they don't this doesn't scale very well for widespread problems such as some of the more common worms and viruses that infect a large population. A lot of institutions don't have 24x7 support to handle calls from dorm students who are often up 'til midnight or later doing work. Many systems can have the connection registration pulled, forcing a new registration immediately. This may be due to proactive scanning or simply to refresh the database at the end of a school year. I guess this requires very detailed cable map databases and is something some providers are reluctant to develop. Scary thought. Correct, this is a problem for universities too. Especially when many of their cabling systems are old and have often been managed (or not) by transient workers (e.g. student employees) over the years. John
Re: Platinum accounts for the Internet (was Re: who offers cheap (personal) 1U colo?)
On 15 Mar 2004 08:01:15 -0500 Robert E. Seastrom [EMAIL PROTECTED] wrote: Maybe NANOG needs to implement a system where you have to log in to a web page with your NANOG meeting passcode in order to get a usable IP address. Then, when an infected computer shows [...] Seconded. This is dirt simple to do. If we believe in public humiliation, a list of infected machines and their owners (along with [...] In the case of some networks and some types of malware, you might need to do more than this. For example, if a compromised host continues to spew out packets without a valid IP, this still eats link capacity. If the network is relatively flat, which it often is in wireless configurations, you still have a problem to solve before normal access for everyone else is restored. John
Re: Clueless service restrictions (was RE: Anti-spam System Idea)
On Tue, 17 Feb 2004 21:48:18 + Alex Bligh [EMAIL PROTECTED] wrote: a) Some forms of filtering, which do occasionally prevent the customer from using their target application, are in general good, as the operational (see, on topic) impact of *not* applying tends to be worse than the disruption of applying them. Examples: source IP filtering on ingress, BGP route filtering. Both of these are known to break harmless applications. I would suggest both are good things. There are some potential applications that these can break also. For example, a distributed application that sends out probes might wish to use the source IP address of a remote collector that is used to measure time delay or network path information. If Lumeta could have tunnels to a bunch of hosts, send traceroutes to various Internet places through those tunnels and have the tunneled hosts use Lumeta's IP as the source IP, they could build a pretty cool distributed peacock map. It is of course difficult to find a way to use these legitimate types of apps today without the infrastructure succumbing to attacks such as the source spoofed DoS traffic floods. John
Re: Outbound Route Optimization
On Mon, 26 Jan 2004 10:30:38 -0500 [EMAIL PROTECTED] wrote: Yes, we can probably make something better than BGP. But will we be able to understand it? I thought this was a good measure of that question... from the current draft-irtf-routing-reqs draft: 2.1.17 Simplicity The architecture MUST be simple enough so that Radia Perlman can explain all the important concepts in less than an hour. :-) John
Re: China Telecom filtering nameservers
On Wed, Oct 22, 2003 at 11:23:08PM -0700, Joe Zhu wrote: well...if it's really problem, someone will help. But if it's smart a$$ comment like this, not sure. I'm not sure what exactly you took offense to, but if I offended someone, particularly our international neighbors, I apologize. In my experience there has been a language barrier problem, which has hindered even the initial dialog via email or telephone. My intention was to convey that the issue is a known problem, what I thought might be going on and that I didn't yet have a good solution. John
Re: China Telecom filtering nameservers
On Wed, Oct 22, 2003 at 02:57:55PM -0400, Daniel Medina wrote: Our main nameservers are being filtered from networks managed by CHINANET, Data Communications Division, China Telecom All traffic from our nameservers (ICMP, DNS queries, etc) is being dropped. As a result, many websites and mail servers with DNS hosted on This has been seen elsewhere too and contacting someone at chinanet has been difficult. Your DNS may be filtered if it supports recursive lookups from chinanet. Forgetting for a moment the potential problems with doing that, it's been suggested that chinanet is filtering DNS as a means to prevent their users from pointing at your DNS to get to sites that they may deem inappropriate (censorship). Does anyone know of any good contacts at China Telecom, or know if there's some explanation for this? This is a problem. If you have someone on staff that can speak Chinese, that might be useful. I haven't heard of email to the public addresses resulting in any meaningful response. I can't point to a specific contact at the moment, but I have been peripherally involved in this problem and if something definitively useful comes out of it I can follow up with you in private email. John
Use squid cache at NANOG29
NANOG29 attendees, Help make my SSH sessions more responsive, use the squid cache. :-) http://www.nanog.org/squid.html John
Re: ICMP Blocking Woes
On Tue, Sep 30, 2003 at 05:22:25PM -0700, Crist Clark wrote: Wasn't this based upon the premise that gear should not return ICMP errors as a result of ICMP packet input as a precaution against error loops? ie said dodgy router did the _right_ thing? That would be disingenuous. RFC1122 clearly lists which ICMP are error messages, The following from W. Richard Stevens' archive presents some additional insight: http://www.kohala.com/start/papers.others/vanj.99feb08.txt John
Re: Worst design decisions?
On Thu, 18 Sep 2003 09:53:38 -0400 Daryl G. Jurbala [EMAIL PROTECTED] wrote: * And how about this: Cisco: PICK A BUSINESS END ON YOUR SMALL OFFICE ROUTING EQUIPMENT. Most of my less clued customers like to help out and rack the equipment ahead of time. And it always gets done pretty side out. Yeah..the side with a Cisco logo and three lights. It sure does look like it should be the front, but it's useless that way. Maybe putting the power on that side would clue people in to the fact that it's basically useless to point that at the easy-access side of the rack. I wouldn't consider that a design flaw. In fact, in some environments that may be the preferred way of doing it. Not only will it look nice and neat, but if the side of the box where all the connections are located is less accessible to humans, that may help lessen the opportunity for someone to touch something they shouldn't be touching. Unless your devices are constantly being re-cabled, this might be considered good design practice. John
Re: News of ISC Developing BIND Patch
On Thu, 18 Sep 2003 15:10:57 -0400 (EDT) [EMAIL PROTECTED] wrote: manufacturer assigned macs are guaranteed to be globally unique. Theoretically. I didn't experience it personally, but I believe there was at least one fairly well known event a few years back where a manufacturer shipped cards with duplicated UAAs. A specific enterprise reconfiguring the mac is akin to an enterprise using RFC1918 space. Fortunately, this practice rarely occurs these days (token ring / SNA shops often did this) although I'd be curious if anyone still does it. Unfortunately, in my opinion, some of the relevant lessons learned from using LAAs (and their demise) didn't take hold at layer 3. John
Re: Port blocking last resort in fight against virus
On Wed, 13 Aug 2003 09:10:32 +0200 Robert Raszuk [EMAIL PROTECTED] wrote: That is fine. The amount of information to be carried is easily extensible. So if you can help us to determine the required fields we will be more than glad to add them. Deploying this as a signalling protocol that is separated from BGP may make sense, although the ease/speed of deployment by using BGP may make this a worthwhile effort. John
What you don't want to hear from a peer
I think its safe to post this now... the AS who asked me this now seems to be gone. Keep in mind we're just a po' little school under the El in Chicago and the network asking was a seemingly large Central/South American provider who was bringing in an OC12 to AADS (compared to our OC3). Maybe it'll help start the weekend with a smile. Subject: Re: peering at the aads Date: Tue, 27 Aug 2002 08:45:52 -0500 John . I have two question if you can help me . 1) Do we have to use the bgp-password ? 2) Can depaul adv our asn's and Ip out , We are almost dead in the water ,we have 20 peers but no traffic , we need 3M is that possible ? John
Re: Mailing list for AADS participants
On Thu, 26 Jun 2003 17:24:14 -0500 Jeff Bartig [EMAIL PROTECTED] wrote: effort to promote peering at the NAP. Have you gotten any other interest in it? About 7 replies so far, which may not warrant it; I'm not sure. It would probably have been much more useful if we had it a few years ago. Maybe it could be made a little more generic to include other Chicago facilities (e.g. Starlight, Equinix) - which is selfishly what I care about, because that is where we are - but as I said I'd entertain hosting others. It would be nice to have a place where I can send facility specific queries, downtime follow-ups and outage notifications to. The list would probably be very light in volume. ...and better than maintaining emails to all the peers due to infrequency of contact, comings, goings and bounces. In addition, an archive of exchange/nap specific activity might serve some useful purposes for historians and researchers. John
Mailing list for AADS participants
Regardless of what many of you may think of AADS generally, are there people who would be interested in joining an AADS mailing list, primarily to be used for broadcasting downtime notices or for discussing Chicago NAP specific issues? Perhaps a mailing list for other specific exchanges may be in order also and I'll entertain hosting those as well, but I'm presently interested in AADS since that is where we are. Please reply offline. If there appears to be enough interest I'll set up the list and make a single announcement here. Please pass this on to others as appropriate. John
Re: Network discovery and mapping
On Sun, Jun 22, 2003 at 09:24:58PM -0400, Sean Donelan wrote: gaps between entities I'm interested in mapping. I want to discover and map the connections individuals may know about, but no one realized how all the pieces were connected. So far the recommendations have included [...] I'm not one to push commercial products, but I don't know of a freely available tool that does the equivalent of what Lumeta http://www.lumeta.com does. This being the solution based on the original work of Cheswick and Burch. This may be just the thing if you need to discover unexpected or even unknown paths. John
Re: Fast TCP?
On Wed, Jun 04, 2003 at 11:41:22PM -0400, Deepak Jain wrote: causes far more severe problems. Since RED causes packet drops, high speed streams that get RED'd are in an immense world of pain. Further, since a In some experience I've had, RED did not cause drops. In fact, I have some data showing how drops increased without RED. http://condor.depaul.edu/~jkristof/red/ I'd like to see (or actually perform them myself if I could :-) some actual tests. If anyone has any updated data doing AQM on high speed links or large streams, please post pointers. John
Re: NAT for an ISP
On Wed, Jun 04, 2003 at 06:48:01PM -0400, Dan Armstrong wrote: More stuff to manage if we push it out to the CPE. Push it out even further. John
Re: Using Policy Routing to stop DoS attacks
On Tue, 25 Mar 2003 09:06:01 -0500 Christian Liendo [EMAIL PROTECTED] wrote: I am sorry if this was discussed before, but I cannot seem to find this. I want to use source routing as a way to stop a DoS rather than use access-lists. If you fooled the router into thinking that the reverse path for the source is on another interface and then used strict unicast RPF checking, that may accomplish what you want without using ACLs. I don't know what impact it would have on your CPU, however; you'll have to investigate or provide more details. Note, depending on the platform and configuration, filters/ACLs may have an insignificant impact on the CPU. If they don't, don't forget to complain to your vendor. :-) John
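A hypothetical IOS-style sketch of the trick described above (the address and interface name are invented for illustration, not taken from the original post): by pointing the "reverse path" for the offending source at Null0, strict uRPF on the real ingress interface discards the traffic without an ACL.

```
! Route the attack source at Null0; strict uRPF then considers the
! reverse path invalid and drops packets arriving on Serial0 that
! claim this source address.
ip route 198.51.100.7 255.255.255.255 Null0
!
interface Serial0
 ip verify unicast reverse-path
```

This is essentially the idea behind what later became known as source-based remotely triggered black holing, where the Null0 route can be distributed via BGP rather than configured per router.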
Re: Route Supression Problem
On Wed, Mar 12, 2003 at 06:53:03AM -0600, Jack Bates wrote: traffic going to them. My router shows the last BGP peer reset about that [...] I've not seen reference to it, since the customer only transits through my network and depends on my redundancy, is it possible to hold his routes in the tables and keep advertising them out unless they are down for a set time period (ie, ignore flaps, but drop them if he's down 15-30 minutes)? While perhaps not always an ideal solution, is it possible for the customer to set default to you rather than having to use BGP? You could in turn use static routing back to them for their netblock(s). John
Re: M$SQL cleanup incentives
On Fri, 21 Feb 2003 17:25:46 -0500 William Allen Simpson [EMAIL PROTECTED] wrote: I've been pretty disappointed with some of the responses on this issue. Maybe you won't like this one either, but here goes. I'd be very interested in hearing how operators feel about 'pushback'. It may make more sense near ingress edges or where there is limited aggregate capacity on the egress (a bottleneck), but debating that point is probably secondary. You can refer to some of the material, particularly by Bellovin, Floyd and others here: http://www.icir.org/pushback/ In the simplest scenario, pushback could be deployed similarly to the way RED is deployed (whether you consider that easy or useful or not, I'm not sure). Signals do not even necessarily need to propagate to upstream routers; rather, anomalous traffic (based on a simple, hopefully, policy) could be dropped more aggressively. This response could be automatic or require intervention. I think there are a number of interesting properties to this approach, especially since if it behaves as one might hope, it could still allow some valid traffic through. Hint: think about what will happen if a Slammer/Sapphire-like worm hits port 25/53/80 and cannot be easily filtered without affecting all traffic on those ports. Coming up with a policy that determines what is anomalous is one of the hard parts. Vendor implementation being another, but you can kind of do this sort of thing already if you're so inclined. Thoughts? John
Locating rogue APs
Apologies if this ends up on the list multiple times. I seem to have trouble getting this posted in a timely fashion. In general, MAC OUI designations may indicate a particular AP. IP multicast group participation may also be used by some APs. Some APs have a few unique ports open. Lastly, APs may be found with a radio on a particular default channel. All of these potentially identifying characteristics may be used to help audit the network for rogue APs. Below is information on locating particular APs:

Multicast Groups
----------------
224.0.1.40   Cisco/Aironet (newer versions)
224.0.1.76   Lucent/Avaya
224.1.0.1    Cisco/Aironet

You can locate who group members are by doing the following on a Cisco router:

  show ip igmp group group-ip-address

Protocols/Ports
---------------
Cisco/Aironet APs have two UDP ports open: 2887 and .

Well known AP MAC OUIs
----------------------
f0       Samsung
00022d   Lucent (Orinoco)
0002b3   Intel
00032f   Global Sun Technology (Linksys)
00045a   Linksys
0010e7   BreezeCom (BreezeNet)
0020d8   NetWave Technologies (BayNetworks)
003065   Apple
004005   ANI Communications
004096   Aironet
00508b   Compaq
00601d   Lucent (WaveLan)
0090d1   Leichu Enterprise Co. (Addtron)
00a0f8   Symbol Technologies
00e029   Standard Microsystems Corp.
080002   3Com
080046   Sony

Well known AP default channels
------------------------------
4: Lucent
6: Aironet, Compaq, BreezeNet

John
Re: Locating rogue APs
On Tue, Feb 11, 2003 at 01:02:34PM -0700, Tony Rall wrote: It sounds like John is referring to using a network IDS system, maybe one per subnet, to try to infer from the wired (maybe) network traffic that an unwanted AP is connected to your wired network. Given that you may want Actually, the info was meant to provide operators with very rudimentary AP tracking info that can mostly be done from the network devices. If someone has login access to a switch/router, you can use the MAC and IGMP address info to identify potential APs fairly easily at the CLI or via scripts. If there is incorrect or missing information, as I mentioned at the mic, I'd appreciate any updates. Feel free to send them to me via private email and I can send out an update if there is interest. John
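A minimal sketch of the "via scripts" idea above (the MAC addresses and the trimmed-down OUI table are illustrative; in practice you would feed in a full 'show mac-address-table' dump and the complete OUI list from the earlier post):

```python
# Flag possible rogue APs by matching MAC addresses learned on switch
# ports against well-known wireless-vendor OUIs.
AP_OUIS = {
    "004096": "Aironet",
    "00601d": "Lucent (WaveLan)",
    "00045a": "Linksys",
    "00a0f8": "Symbol Technologies",
}

def normalize(mac):
    """Reduce a MAC in any common notation to its 6-hex-digit OUI."""
    return mac.lower().replace(":", "").replace("-", "").replace(".", "")[:6]

def possible_aps(mac_table):
    """Return (mac, vendor) pairs whose OUI matches a known AP vendor."""
    hits = []
    for mac in mac_table:
        vendor = AP_OUIS.get(normalize(mac))
        if vendor:
            hits.append((mac, vendor))
    return hits

# Two entries from a hypothetical switch forwarding table:
print(possible_aps(["00:40:96:12:34:56", "00:0c:29:aa:bb:cc"]))
# → [('00:40:96:12:34:56', 'Aironet')]
```

An OUI match is only a hint, of course; a laptop with an Aironet card will match just as well as an AP, so hits still need to be chased down at the port.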
Re: FW: Re: Is there a line of defense against Distributed Reflective attacks?
On Sat, Jan 18, 2003 at 08:58:13AM -0500, Daniel Senie wrote: While it's nice that router vendors implemented unicast RPF to make configuration in some cases easier, using simple ACLs isn't necessarily hard at the edges either. It might be nice if all router vendors were able to associate the interface configured address(es)/nets as a variable for ingress filters. So in the Cisco world, a simple example would be:

interface Serial0
 ip address 192.0.2.1 255.255.255.128
 ip access-group 100 in
!
interface Serial1
 ip address 192.0.2.129 255.255.255.128
 ip access-group 100 in
!
access-list 100 permit ip $interface-routes any
access-list 100 deny ip any any

Those sorts of features could make the scaling issue much easier for large providers and environments where routers may have lots of interfaces. An operator could also essentially build tools to automatically configure/verify configurations this way, but I think it would be better for the router vendors to do this for us. John
Re: Is there a line of defense against Distributed Reflective attacks?
On Thu, Jan 16, 2003 at 08:48:03PM -0500, Brad Laue wrote: Having researched this in-depth after reading a rather cursory article on the topic (http://grc.com/dos/drdos.htm), only two main methods come to my mind to protect against it. There are a few more methods, some of which have already been mentioned, including something called pushback. Very few solutions, particularly elegant ones, are widely deployed today. At some point, sophisticated (or even not so sophisticated) DoS attacks can be hard to distinguish from valid traffic, particularly if widely distributed and the traffic is as valid looking as any other bit of traffic. By way of quick review, such an attack is carried out by forging the source address of the target host and sending large quantities of packets toward a high-bandwidth middleman or several such. It doesn't have to be forged, that step just makes it harder to trace back to the original source. There are some solutions that try to deal with this, including an IETF working group called itrace. UUNET also developed something called CenterTrack. BBN has something called Source Path Isolation Engine (SPIE). There are probably other things I'm forgetting, but they are generally similar in concept to these. To my knowledge the network encompassing the target host is largely unable to protect itself other than 'poisoning' the route to the host in question. This succeeds in minimizing the impact of such an attack on This is true, the survivability of the victim largely depends on the security of everyone else, which makes solving the problem so exceptionally difficult. the network itself, but also achieves the end of removing the target host from the Internet entirely. Additionally, if the targeted host is a router, little if anything can be done to stop that network from going down. 
I'm not sure I fully understand what you're saying here, but a router can effectively be taken out of service as any other end host or network can by simply overwhelming it with packets to process (for itself or to be forwarded). One method that comes to mind that can slow the incoming traffic in a more distributed way is ECN (explicit congestion notification), but it doesn't seem as though the implementation of ECN is a priority for many small or large networks (correct me if I'm wrong on this point). If ECN ECN cannot be an effective solution unless you trust that all edge hosts, including the attacking hosts, will use it. Since it is a mechanism that is used to signal transmitting hosts to slow down, attackers can choose not to implement ECN or ignore ECN signals. Unless you could control all the end hosts, and as long as there is intelligence in the end hosts a user could modify, this won't help. is a practical solution to an attack of this kind, what prevents its implementation? Lack of awareness, or other? It is still fairly new and not widely deployed. Routers need not only to support it, but also have to be enabled to use it. It is a fairly significant change to the way congestion control is currently done in the Internet and it will take some time before penetration occurs. Also, are there other methods of protecting a targeted network from losing functionality during such an attack? Many are reactive, often because you can't know what a DoS is until it's happening. In that case, providers can use BGP advertisements to blackhole hosts or networks (though that can essentially finish the job the attacker started). If attacks target a DNS name, the end hosts can change their IP address (though DNS servers may still get pounded). 
If anything unique about the attack traffic can be determined, filters or rate limits can be placed as close to the sources as possible to block it (and that fails as attack traffic becomes increasingly dispersed and identical to valid traffic). If more capacity than the attack traffic uses can be obtained, the attack could be ignored or mitigated (but this might be expensive and impractical). If the sources can be tracked, perhaps they can be stopped (but large numbers of sources make this a scaling issue and sometimes not all responsible parties are as cooperative or friendly as you might like). There is also the threat of legal response, which could encourage networks and hosts to stop and prevent attacks in the future (this could have negative impacts for the openness of the net and potentially be difficult to enforce when multiple jurisdictions are involved). From a proactive approach, hosts could be secured to prevent an outsider from using them for attack. The sorry state of system security doesn't seem to be getting better and even if we had perfect end system security, an attacker could still use their own system(s) to launch attacks. Eventually it all boils down to a physical security problem. Pricing models can be used to make it expensive to send attack traffic. How to do the billing and who to bill might not be so easy. ...and there may
Re: Is there a line of defense against Distributed Reflective attacks?
On Fri, 17 Jan 2003 18:38:08 + (GMT) Christopher L. Morrow [EMAIL PROTECTED] wrote: has something called Source Path Isolation Engine (SPIE). This would be cool to see a design/whitepaper for.. Kelly? In addition to David's link: http://www.ir.bbn.com/projects/SPIE/ mentioned, which penalize or limit high rate flows are not widely deployed yet. (see above, is this what you really want?) I happen to like the idea of using something like a RED queue that can more aggressively drop traffic that is 'out of profile' in times of congestion. Like most things, this probably really works best at the edges of the network, but my gut feeling is that it can be a relatively fair and elegant approach. However, it doesn't really solve the DoS problem, it is really trying to just solve a congestion problem, but it may have some nice side effects. For example, I'm planning on trying out some new features from our border router vendor, where we set a more aggressive RED drop profile per source IP within our netblock where the source exceeds a configured transmission rate. The basic idea being to get the high load offering sources to slow down in times of high usage/congestion. Hopefully they use TCP, but if not, perhaps drop even more aggressively? If the capacity is there, high load sources get through. So, this doesn't stop attacks, but tries to keep some valid data flowing through a limited egress pipe or in other words, tries to provide some fairness between multiple sources in times of high load. Of course, if everyone hits the ENTER key at the same time this doesn't work, but hopefully statistical multiplexing is working as well as it always has for us. John
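The per-source RED idea above can be sketched as follows (a toy illustration, not the vendor feature being described; the thresholds, max drop probabilities and the 5 Mb/s rate limit are invented numbers):

```python
def red_drop_prob(avg_queue, min_th, max_th, max_p):
    """Classic RED: drop probability rises linearly from 0 at min_th
    to max_p at max_th, then goes to 1 above max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def drop_prob_for_source(avg_queue, source_rate_bps, rate_limit_bps=5_000_000):
    """Sources exceeding the configured rate get a more aggressive
    profile (lower thresholds, higher max_p); in-profile sources get
    the normal one."""
    if source_rate_bps > rate_limit_bps:
        return red_drop_prob(avg_queue, min_th=5, max_th=20, max_p=0.5)
    return red_drop_prob(avg_queue, min_th=15, max_th=40, max_p=0.1)
```

At a given queue depth, an out-of-profile source sees drops earlier and harder, which is what nudges high-load TCP senders to back off while leaving in-profile sources largely untouched.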
Re: AOL Cogent
On Sun, Dec 29, 2002 at 09:12:16PM +, Paul Vixie wrote: per-bit revenue for high tier network owners would turn into per-port revenue for exchange point operators. where's the market in that? how I think you just answered your own question. Exchange point operations. could a high tier even exist in those conditions? I think it's a difficult market to exist in anyway. It may be that networks can make revenue on characteristics of their network other than simply bps. The quality (read: latency, loss factor, transparency/or-not, connectedness) and services (read: various types of servers such as for games, voice/multimedia gateways, storage, flexibility - perhaps deliver service further than the smartjack?) may be what differentiates one from another. This doesn't seem to be happening though and I'm not sure how likely it will be. If the market is solely about the number of bits, soon this might not be an attractive market for a lot of providers to be in. If there are a lot of suppliers and the ease of changing suppliers is simple (good reason for you to want to get rid of NAT :-), the market will be commoditized with consumers simply moving their connections around to Cogent-like providers on a month-to-month basis. This assumes most suppliers provide a reasonable level of quality, which most probably do. If there are only a few suppliers (oligopoly?), little choice and strong barriers to entry, it might be a much more attractive market to be in. As a customer, I'd like to see the former more than the latter. Perhaps then the services above would be more forthcoming? John
Re: dontaing bgp config files [Re: Risk of Internet collapse grows]
On Sun, 1 Dec 2002 23:03:22 -0800 (PST) Ratul Mahajan [EMAIL PROTECTED] wrote: speaking neighbor), you can help us by donating your bgp config files. abstracted or anonymized versions are ok. Of possible general interest to the list, I had begun work over a year ago in 'mapping' out peering arrangements at various exchanges using simple packet probing techniques (traceroutes mostly from behind various providers nets) and gathering available public data. If anyone wants to see the data or more info, let me know and I'll make it available. It's only mildly interesting, but would be a useful method in developing maps ala the Lumeta method. It takes a significant amount of time (very difficult to automate) and energy to do this work, so it's not all that reliable, practical or timely in many cases. John
Re: The power of water
At 2:03 PM -0400 10/19/02, Sean Donelan wrote: Stuff happens to everyone, its how you respond. Would your company have been able to recover as quickly? Over one weekend I was part of a team of folks involved in moving a voice/data center for a fairly sizeable regional office across the city in Atlanta. We had pretty much everything moved, installed and working by Saturday night. Early Sunday morning as we were tidying up, someone called over the portable radios that it was 'raining in the data center!' If I remember correctly, a return pipe for the cooling system had come apart at a poorly soldered connection in the ceiling above the UPS. Soon enough a good portion of the floor tiles came down and underneath the raised floor a nice pond was forming. Since it was a Sunday morning, it took some time before some building engineers could be called in to turn off building water. By that time, the 1' raised floor was pretty well full. We were on an upper floor, maybe 7 or so. I don't remember the actual amount of water, but there was water draining into the third level deep parking garage below. Some servers had been permanently damaged, but most equipment including some servers (remember NetFRAMEs?), hubs, routers, the cabling system and a PBX survived to provide service until the insurance company provided funds for everything to be replaced. Having good vendor relationships helped significantly in my experience. Many were overnighting equipment without requiring a lot of red tape. Having access to a bunch of hair dryers was also useful. :-) John
Congestion at SBC/AADS NAP?
Has anyone seen what may be ATM level congestion at the Chicago NAP recently? ...or have you seen it in the recent past? We're having trouble pinpointing a problem, which may have been occurring for a long time, but just now really beginning to affect us significantly. We are seeing latency on certain peer sessions go through the roof (often more so in one direction than the other) and cannot (at least not yet) attribute it to hardware/link issues or problems with our edge device. Thanks for any insight, John
Re: Paul's Mailfrom (Was: IETF SMTP Working Group Proposal at smtpng.org)
On Tue, 27 Aug 2002 00:59:49 +0200 Jeroen Massar [EMAIL PROTECTED] wrote: Nice rant Randy, but if you even ever wondered why the wording Mail Relay exists you might see that if an ISP simply forwards all outgoing tcp port 25 traffic to one of their relays and protects that from weird spam The point is that 25 is just a number. You'll eventually be blocking all numbers sooner or later (and re-inventing dumb terminals). John
Re: Paul's Mailfrom (Was: IETF SMTP Working Group Proposal at smtpng.org)
On Tue, 27 Aug 2002 01:54:39 +0200 Jeroen Massar [EMAIL PROTECTED] wrote: SMTP is a protocol which is based on relaying messages from one mailserver to another. An endnode (especially workstations) don't need to run SMTP. I'm not sure how to truly disable an SMTP server from running on an end host. You can block or force forward port 25, but that is just a number. Be prepared to start doing that for all ports, then protocols, then IP addresses, then protocols again. Furthermore, a forced relay, while perhaps helping to solve the immediate spam problem, is most definitely interfering with other things with potentially harmful long term effects. Two of those are end-to-end transparency and the fixing of the real problem. You may not care about either of those, but I would argue they shouldn't be dismissed without very serious thought. So what's so bad about forwarding all tcp/25 traffic over that relay and letting that relay decide if the MAIL FROM: is allowed to be relayed? And if a client wants to mail from another domain which isn't There are some potential problems. Don't bother answering them, I'm sure they can be disputed, but I'm also sure there are plenty of other examples an SMTP expert could think of: What if there is a new SMTP specification that doesn't work through the forced relay? What about simply not trusting a relay to do the right thing or for fear of a forced relay adding/changing/snooping/delaying the traffic? What about when SMTP starts going over something other than TCP port 25? The whole problem is yet again that a small amount of people (this time spammers) make a whole lot of problems for a lot of people (we). Maybe some different thinking is called for. Here are some other suggestions, take them or leave them. They aren't perfect either (don't try and answer these either, I'm sure they can be disputed :-): Force forward by default, but allow anyone who wants to use TCP port 25 the ability to do so. 
They must sign an non-abuse agreement or whatever. Then they get their host/link put into the TCP port 25 open path. Do some rate-limiting by default. Perhaps coupled with the above? Start offering spam blocking and filtering services for end users. Get better at monitoring and incident response. This will pay dividends for lots of other areas as well. ...and finally to quote Randy, send code. :-) John
Re: Paul's Mailfrom (Was: IETF SMTP Working Group Proposal at smtpng.org)
On Tue, Aug 27, 2002 at 12:14:46PM +1000, Martin wrote: but surely an MTA derives its usefulness by running on port 25. i don't remember reading about where in the DNS MX RR you could specify what port the MTA would be listening on... Surely you're not a spammer looking for tips, are you? :-) John
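The point about the MX RR is visible in the record format itself; a zone-file sketch (example.com and the port number are purely illustrative):

```
; An MX record carries only a preference and a target hostname -- no port field.
example.com.                  IN MX  10 mail.example.com.

; Contrast with SRV, whose RDATA is priority, weight, port, target,
; so a nonstandard port could be advertised (587 here is illustrative):
_submission._tcp.example.com. IN SRV 0 5 587 mail.example.com.
```

So as the spec stands, an MX-reachable MTA is effectively pinned to TCP port 25.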
Re: Best Current Practices for Routing Protocol Security
On Wed, Aug 14, 2002 at 01:23:01PM -0400, Sean Donelan wrote:

4. Don't exchange routing information with external parties

And don't trust them. Use limits on the number of prefixes you're willing to accept. Verify routes received against some third party (e.g. a routing database).

5. Explicit routing neighbor associations - passive-interface default

Both inbound and outbound. On Ciscos, in addition to passive-interface you might do 'distribute-list 1 in interface', where 1 is an ACL that can simply be 'deny any'.

6. Address validation on all edge devices

Filter so that only neighbor IPs can reach the routing protocol itself. For example, on a BGP peer, filter TCP port 179 on each peer interface to allow only the expected peer IP.

Also:
- Apply damping as appropriate, but protect the subnets serving the root DNS servers from accidental damping.
- Limit the maximum prefix length you're willing to accept.
- Make extensive use of remote logging and monitoring; keep an eye on routing table changes over time and the overall operation of the routers.
- Filter out known bogus routes such as reserved, private, and special-use address space, as appropriate.

John
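The Cisco-specific pieces above might look like the following sketch. This assumes IOS syntax; the peer 192.0.2.1 (AS 64496), our side 192.0.2.2 (AS 64500), and the interface names are all hypothetical:

```
! Cap the number of prefixes accepted from an external peer
router bgp 64500
 neighbor 192.0.2.1 remote-as 64496
 neighbor 192.0.2.1 maximum-prefix 120000

! No IGP chatter on interfaces that have no neighbors
router ospf 1
 passive-interface default

! Refuse inbound routing updates on a given interface via a deny-any list
access-list 1 deny any
router rip
 distribute-list 1 in Ethernet0

! Allow BGP (TCP 179) only from the expected peer on the peering interface
access-list 101 permit tcp host 192.0.2.1 host 192.0.2.2 eq bgp
access-list 101 deny   tcp any any eq bgp
access-list 101 permit ip any any
interface Serial0
 ip access-group 101 in
```

The access list only covers traffic arriving on that interface; the equivalent filter belongs on every peering edge.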
Cogent issues at AADS PVC 5.34?
We're currently experiencing significant latency through Cogent at AADS. I've heard they have some general latency issues, but nothing concrete yet as to what or where. Does anyone have details on any problems? We're still waiting for a response back from the NOC. Thanks, John
Re: Cogent issues at AADS PVC 5.34?
Thanks to all those who responded. The problem appears to have mysteriously cleared up for the moment. Mysteriously, because I haven't yet heard official word from Cogent or any other third party on a definitive cause of the degradation. John
Re: GBLX router upgrade breaks bgp sessions
On Wed, Jul 10, 2002 at 07:04:38AM -0700, nanog wrote: Subject says it all. GBLX upgraded some edge routers to a new JunOS release (possibly 5.3 rev 24) - and now our bgp sessions continually reset with: Jul 10 06:58:24 MST: %BGP-3-NOTIFICATION: sent to neighbor X.X.X.X 3/3 (update missing required attributes) 0 bytes I don't know about gblx, but I saw a problem like this at our border. After JunOS was upgraded to 5.3r2.4 (the other side running IOS), the session was continually being reset. The BGP session between these two peers was set up with 'family inet any' (for multicast peering), and when that was removed, the problem went away. I also heard about a problem I2 was having with their Juniper code; it sounded related, but I haven't investigated the details yet. John
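For reference, the workaround described above amounts to removing the combined unicast/multicast address family from the BGP group. A sketch in JunOS configuration syntax; the group name, peer address, and AS number are hypothetical:

```
# 'family inet any' negotiates both unicast and multicast NLRI with the peer;
# this is the knob whose removal stopped the session resets described above.
protocols {
    bgp {
        group ebgp-peer {
            type external;
            peer-as 64496;
            neighbor 192.0.2.1;
            family inet {
                any;
            }
        }
    }
}

# Removal, from the CLI:
#   delete protocols bgp group ebgp-peer family inet any
```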
Re: multicast (was Re: Readiness for IPV6)
On Tue, Jul 09, 2002 at 11:16:56AM -0400, Leo Bicknell wrote: It's a cute list. Where's ATT (with all the old Home customers)? Where's AOL? Don't see UUNet either. UUNET supports multicast, although the quality of that experience wasn't very good for me. Last I heard, it's one price to receive multicast and an additional charge to generate multicast through them. John
Re: operational: icmp echo out of control?
On Tue, 28 May 2002 16:16:08 -0400 [EMAIL PROTECTED] wrote: It's common enough that it's got its own acronym. IWF - Idiot With Firewall. We call them OZZADs and here is how we respond: http://condor.depaul.edu/~jkristof/technotes/incident-response.html John
Re: operational: icmp echo out of control?
We call them OZZADs and here is how we respond: Hmm.. 3 people have asked already What's an OZZAD? ;) So I don't have to keep answering this, forwarded to the group: Over Zealous Zone Alarm Dork John
Re: Certification or College degrees? Was: RE: list problems?
On Wed, 22 May 2002 16:40:27 -0400 Kristian P. Jackson [EMAIL PROTECTED] wrote: network engineers, just as a bunch of network engineers are no more qualified to program. Perhaps a bachelor's in network engineering is in order? We actually have that - or something close to it. We are slowly building a bigger networking lab with router-ish stuff for students to learn from. In fact, I'll be handing off a full BGP table for them to see and play with in the lab. If you want to help us educate, we'll gladly accept any donations, particularly gear, we can get. :-) http://www.cs.depaul.edu/programs/2002/BachelorNT2002.asp http://ipdweb.cs.depaul.edu/programs/lan/index.html http://condor.depaul.edu/~jkristof/tdc375/ http://condor.depaul.edu/~jkristof/2001Spr365/ John
Re: Large ISPs doing NAT?
On Wed, 1 May 2002 11:00:01 -0400 (EDT) mike harrison [EMAIL PROTECTED] wrote: Almost? I'd say it's hands down an EXCELLENT reason. In some configs, though, the NAT'd people can still see each other and cause problems, but it still cuts down the exposure. As well as perpetuating neglect of fixing the real problem. John