RE: Cisco's Statement about IPR Claimed in draft-ietf-tcpm-tcpsecure
The same document that completely ignores the fact that port number randomness will severely limit susceptibility to such an attack? How many zombies would it take to search the port number space exhaustively? Irrelevant. The limiting factor here is how many packets can make it to the CPU. Using 10K pps as a nice round (and high) figure, a single machine can do that. Also, many of the calculations I've seen assume much higher pps when calculating time to reset a session.

Has anyone done a test to see what a Juniper M5/10/whatever and a GSR can actually take without dropping packets due to rate limiting and/or falling over from being packeted? In some fairly informal tests that I did with an M20/RE3, I had to saturate the PFE-RE link (100Mbps) with packets destined to the RE before routing adjacencies started flapping. Packet size (64-1518 bytes) didn't make much of a difference (larger packets seemed to make things a bit more difficult for the routing protocols), and CPU usage on the RE rarely went above 30% during any test. Streams were sent from random source addresses.

Packets that elicited a response from the RE (e.g., pings) didn't appear to have a greater effect on performance than ones that didn't, as there appears to be a good amount of rate-limiting going on internally to keep things reasonably calm. It's documented that pings to the RE are limited to 1000/sec, but it also appears that other packet types such as SYNs are rate-limited in some fashion, either the ingress packets themselves or maybe the responses from the RE. But in any case, whatever rate-limiting was going on didn't appear to be affecting routing adjacencies. Although I didn't try anything too fancy, it appears that it's pretty difficult to bog down the CPU (a PIII 600) on an RE3. Routing adjacencies were only affected when the PFE-RE link became saturated, which isn't surprising.
There was no indication of transit traffic being affected, which also isn't surprising given that such packets are handled by ASICs. -Terry
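For a rough sense of the numbers being discussed, here's a back-of-the-envelope sketch in Python. The window size, port-guess count, and packet rate are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope cost of a blind TCP RST attack, per the discussion
# above. All figures are illustrative assumptions, not measurements.

SEQ_SPACE = 2 ** 32      # TCP sequence number space
WINDOW = 16_384          # assumed receive window: one RST per window-sized slice
PORT_GUESSES = 50_000    # ephemeral ports to cover if the source port is random

def packets_to_reset(randomized_ports: bool) -> int:
    """Worst-case packets needed to hit a valid (port, in-window seq) pair."""
    seq_guesses = SEQ_SPACE // WINDOW   # 262,144 window-sized slices
    ports = PORT_GUESSES if randomized_ports else 1
    return seq_guesses * ports

def hours_at(pps: int, packets: int) -> float:
    """Attack duration in hours at a given packet rate."""
    return packets / pps / 3600

known = packets_to_reset(randomized_ports=False)
randomized = packets_to_reset(randomized_ports=True)
print(f"known port:      {known:,} pkts, {hours_at(10_000, known):.3f} h at 10K pps")
print(f"randomized port: {randomized:,} pkts, {hours_at(10_000, randomized):,.0f} h at 10K pps")
```

At these (assumed) numbers, port randomization turns a sub-minute attack into one lasting weeks at the same 10K pps, which is the point about randomness limiting susceptibility.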
RE: Dumb users spread viruses
There is nothing wrong with a user who thinks they should not have to know how to protect their computer from virus infections. If we (the community who provides them service and software) can't make it safe-by-default, then the problem rests with us, not with the end users.

This is somewhat of a surprising position. What is considered safe? How do you make a computer safe from the most irresponsible of users, who will run any executable without thinking twice, other than maybe locking down their access rights to an extent that 1) is probably impractical, and 2) would cause an uproar? It seems there has to be at least some level of basic clue on the user side of things for there to be any hope of this problem going away. As the Internet becomes a commodity, it doesn't seem unreasonable to me to insist that those who use it be versed in the basics of protecting themselves against common threats. No one is asking for expertise -- just the basics would be a big help, wouldn't it? If we accept that there's no such thing as perfect security or "completely safe," how do we protect users who assume this isn't the case simply because it's a more convenient assumption for them to make? OpenBSD is reasonably safe by default. But as functionality and user-friendliness reach levels that non-technical users require/demand, I'm not seeing how we make systems safe without user cooperation; i.e., basic clue on their part. The "someone else should be totally responsible" stuff exhibited in the article just doesn't seem reasonable here. Society as a whole could benefit from people taking more responsibility for themselves -- the Internet doesn't seem any different in this regard. -Terry
RE: Strange public traceroutes return private RFC1918 addresses
A more important question is what will happen as we move out of the 1500 byte Ethernet world into the jumbo gigE world. It's only a matter of time before end users will be running gigE networks and want to use jumbo MTUs on their Internet links. The performance gain achieved by using jumbo frames outside of very specific LAN scenarios is highly questionable, and they're still not standardized. Are jumbo Internet MTUs seen as a pressing issue by ISPs and vendors these days? -Terry
RE: Strange public traceroutes return private RFC1918 addresses
Leo Bicknell wrote: Since most POS is 4470, adding a jumbo frame GigE edge makes this application work much more efficiently, even if it doesn't enable jumbo (9k) frames end to end. The interesting thing here is it means there absolutely is a PMTU issue, a 9K edge with a 4470 core. This brings up the question of what other MTUs are common on the Internet, as well as which ones are simply defaults (i.e., could easily be increased) and which ones are the result of device/protocol limitations. And why 4470 for POS? Did everyone borrow a vendor's FDDI-like default or is there a technical reason? PPP seems able to use 64k packets (as can the frame-based version of GFP, incidentally, POS's likely replacement). -Terry
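To put rough numbers on the efficiency question, here's a simple sketch (assuming bare TCP/IPv4 headers with no options and ignoring link-layer framing; real overhead varies):

```python
# Rough framing-efficiency comparison for the common MTUs mentioned above.
# Overhead figure is a simplified assumption: 20-byte IPv4 + 20-byte TCP
# headers, no options, no link-layer framing.

HEADERS = 40  # bytes of header per packet (assumed)

def payload_efficiency(mtu: int) -> float:
    """Fraction of each packet that is application payload."""
    return (mtu - HEADERS) / mtu

def packets_per_gb(mtu: int) -> int:
    """Packets needed to move 1 GB of payload at a given MTU."""
    payload = mtu - HEADERS
    return -(-10**9 // payload)   # ceiling division

for mtu in (1500, 4470, 9000):
    print(f"MTU {mtu:5}: {payload_efficiency(mtu):.1%} payload, "
          f"{packets_per_gb(mtu):,} packets/GB")
```

Under these assumptions the payload-efficiency gain from 1500 to 9000 bytes is only a couple of percent; the larger win is the roughly 6x drop in packet count (and thus per-packet processing) per gigabyte moved.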
RE: [Activity logging archiving tool]
I'm fairly certain that the tacacs standard implementations available on the cisco routers log out changes to the config made by users... That and a little log parsing magic and you have this data also.

While we're being Cisco-centric, 12.3(4)T has a new feature by which the router can keep a configuration audit log: http://www.cisco.com/en/US/products/sw/iosswrel/ps5207/products_feature_guide09186a00801d1e81.html -Terry
RE: ISPs' willingness to take action
[EMAIL PROTECTED] wrote: As I see it, we're experiencing an ever-increasing flood of garbage network traffic. While not all of it is easy or appropriate to target, it seems to me there's some low hanging fruit that could generate serious gains with relatively little investment.

I agree to an extent, though I think there are much more reasonable places to start rather than adding IDS functionality to ISP routers and moving to whitelist-only SMTP: anti-spoof/BGP filtering, DoS tracking/sinkholing, working abuse@ addresses, etc. But the problem is with the end-hosts, so a common viewpoint is that this is where the majority of the cleanup work needs to be done. This was discussed at length not long ago.

A few things that make sense to me (as a non-ISP network consultant) include: 1) Summarily fencing/sandboxing/disconnecting clients sending high volumes of spam, virii, etc. You might politely contact your commercial/static clients first, but anyone connecting a bare PC on a broadband circuit is too stupid to deserve coddling. The great majority of your clients would thank you profusely.

What if the great majority of your clients are bare PCs on broadband circuits? So, the big question: why don't ISPs do more of this? What's the ROI? The costs have to be offset somehow. How easy is it to convince clients to pay more to be your customer because you're more strict on garbage traffic originating from your network relative to your competitors? Many feel that basic preventative measures like the ones I mentioned are things that all ISPs should do for the sake of making the Internet a better place, or however you want to phrase it. But the decision makers at a lot of ISPs seem to take a different viewpoint, perhaps because their primary concern, as businesses, is dollar signs. -Terry
RE: AOL fixing Microsoft default settings
How many other ISPs intend to follow AOL's practice and use their connection support software to fix the defaults on their customer's Windows computers? Sounds good to me. The potential for these users to be less-than-educated about the existence of this feature means that the potential for this to increase the overall network security is a good thing. Hopefully they will enable automatic checking and downloading of critical software updates as well.

The "without notice" part is perhaps somewhat unsettling. I can appreciate that attempting to explain this type of change to the AOL user base would be challenging, but I'd submit that third-party software making OS changes like this without the user's knowledge could be thin ice territory. Where is the line drawn once this path is chosen? -Terry
RE: How much advance notice do ISPs need to deploy IPv6?
Christian Kuhtz wrote: So, since there won't be a flag day, ... Maybe that's the point. The notion of Internet flag days has largely disappeared as the Internet's ubiquity and criticality have increased. There won't be flag days for IPv6, S(o)BGP, BGP-5, etc. So what's a company like Verisign to do when they want to substantially change the way the COM and NET zones work? (And is the answer different if they want to make these changes solely for their own financial gain?) If an incremental rollout isn't possible here, then folks end up in the fairly rare position of trying to figure out how to roll out a significant change that will affect the entire Internet at what will essentially be the flip of a switch. Clearly, pulling a Verisign and doing it without notifying anybody beforehand isn't the right way. But this alone doesn't make it much easier to decide what *is* the right way. -Terry
RE: Pitfalls of _accepting_ /24s
jlewis wrote: On the topic of announcing PA /24's, what procedures do you take to make sure that a new customer who wants to announce a few PA (P being one or more P's other than yourself) IP space is legit and should be announcing that IP space? I'm also interested in hearing current practices on this for PA space, PI space, or whatever. With UUNet and Qwest all I've had to do is make a phone call. I don't know whether or not whois was checked before the changes were made. I think this is important because the current, fairly lax policies on this negate some of the benefit of edge anti-spoof filtering. If, for example, it's quick and easy to contact an ISP posing as a customer (or maybe the customer is doing the evil deeds themselves, so no posing is necessary) and get IP block X allowed through the ISP's BGP/anti-spoof filters for that customer, what good have the filters done? If we want ISPs to put forth the effort to deploy filters on all their edge links, it seems silly for it to be so easy for one to socially engineer their spoofed packets right through them. Personally, I just check whois, and if it looks legit, I'll listen to those routes and even create their route objects as necessary, since some of our upstreams require that. If everyone checked whois it would at least put an end to the discouraging number of unallocated prefixes one can find in the BGP tables at any given time. But it's also not difficult for someone with bad intentions to find space that is allocated per whois but not advertised by anyone. So it seems like additional verification steps may be needed if we're serious about wanting to put an end to spoofed packets. -Terry
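The kind of sanity check being argued for might look like the following sketch, with the registry lookup stubbed out as a static table (the customer name and prefixes are hypothetical; a real check would query whois or an RIR/IRR database):

```python
# Minimal sketch of the verification step discussed above: before punching
# a hole in a customer's BGP/anti-spoof filter, check that the requested
# prefix falls within a block registered to that customer. The registry
# is a hard-coded dict here purely for illustration.

import ipaddress

# Hypothetical registry data: customer name -> allocated blocks
REGISTRY = {
    "example-customer": ["192.0.2.0/24", "198.51.100.0/24"],
}

def request_is_plausible(customer: str, requested: str) -> bool:
    """True if the requested prefix is covered by a block registered
    to the customer."""
    prefix = ipaddress.ip_network(requested)
    for block in REGISTRY.get(customer, []):
        if prefix.subnet_of(ipaddress.ip_network(block)):
            return True
    return False

print(request_is_plausible("example-customer", "192.0.2.128/25"))  # True
print(request_is_plausible("example-customer", "203.0.113.0/24"))  # False
```

This obviously doesn't catch the case where the requester legitimately controls the space per whois but the space isn't advertised by anyone, which is the harder problem raised above.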
RE: Block all servers?
This internet draft is available at: http://quimby.gnus.org/internet-drafts/draft-aboba-nat-ipsec-04.txt Ken Emery wrote: I can't figure out if anything happened with this draft (I'm guessing nothing went on). The draft expired on December 1, 2001. IPSec NAT Traversal is still being standardized, but has already been implemented in a good number of products. Current drafts: http://www.ietf.org/internet-drafts/draft-ietf-ipsec-nat-t-ike-07.txt http://www.ietf.org/internet-drafts/draft-ietf-ipsec-udp-encaps-06.txt http://www.ietf.org/internet-drafts/draft-ietf-ipsec-nat-reqts-05.txt Jon Lewis wrote: But why all this talk of NAT? Even if we all universally deployed it on monday, it wouldn't solve the problem. All it would do is keep the spammer/hackers from turning grandma's PC into a web server/proxy. As well as preventing infection from worms like Blaster, and so forth. It's hard to imagine one solution solving the entire laundry list of problems. One step at a time. That being said, NAT does break stuff and as has been mentioned, filtering is certainly possible without having to bring NAT into the mix. Microsoft assures us that the Windows firewall will be enabled by default starting with WinXP patches early next year. How easy will it be to turn it off? Will a virus be able to do it for you? -Terry
BellSouth prefix deaggregation (was: as6198 aggregation event)
More on this - Two of BellSouth's AS's (6197 and 6198) have combined to inject around 1,000 deaggregated prefixes into the global routing tables over the last few weeks (in addition to their usual load of ~600+, for a total of ~1,600). This does indeed appear to be having an operational impact on some folks, an example of which is here: http://isp-lists.isp-planet.com/isp-bgp/0310/msg00059.html

The vast majority (if not all) of these prefixes are covered within aggregates announced by BellSouth AS6389, which acts as an upstream to these and around 20 other BellSouth AS's. (These other AS's combine for another ~700 deaggregated announcements, meaning that BellSouth may currently be advertising more deaggregated prefixes into the global routing tables than any other entity.) Some of these AS's appear to use Qwest as backup transit, so presumably the motive behind the vast deaggregation is failover. Is there a better way of achieving this than forcing the Internet to store ~2,300 extra routes? Can anyone from BellSouth comment? What if a few other major ISPs were to add a thousand or so deaggregated routes in a few weeks' time? Would there be a greater impact?

(Note: The above numbers are based on data from cidr-report.org. Some other looking glasses were also checked to see if cidr-report.org's view of these AS's is consistent with the Internet as a whole. This appears to be the case, but corrections are welcome.) -Terry

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Terry Baranski
Sent: Sunday, October 05, 2003 3:01 PM
To: 'James Cowie'; [EMAIL PROTECTED]
Subject: RE: as6198 aggregation event

James Cowie wrote: On Friday, we noted with some interest the appearance of more than six hundred deaggregated /24s into the global routing tables. More unusually, they're still in there this morning.
AS6198 (BellSouth Miami) seems to have been patiently injecting them over the course of several hours, between about 04:00 GMT and 08:00 GMT on Friday morning (3 Oct 2003). If you look at the 09/19 and 09/26 CIDR Reports, BellSouth Atlanta (AS6197) did something similar during this time period -- they added about 350 deaggregated prefixes, most if not all /24's. Usually when we see deaggregations, they hit quickly and they disappear quickly; nice sharp vertical jumps in the table size. This event lasted for hours and, more importantly, the prefixes haven't come back out again, an unusual pattern for a single-origin change that effectively expanded global tables by half a percent. That AS6197's additions are still present isn't encouraging. -Terry
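As an illustration of why covered deaggregates are pure table bloat, Python's ipaddress module can collapse contiguous more-specifics back into their aggregate. The prefixes below are examples, not BellSouth's actual routes:

```python
# 256 contiguous /24s carry no more reachability information than the
# single /16 that covers them; collapse_addresses recombines them.
# Example prefixes only -- not the actual announcements discussed above.

import ipaddress

covering = ipaddress.ip_network("10.16.0.0/16")    # example aggregate
deaggregated = list(covering.subnets(new_prefix=24))

collapsed = list(ipaddress.collapse_addresses(deaggregated))

print(f"{len(deaggregated)} /24 announcements collapse to {collapsed}")
```

The same collapse run against a real table dump would show how much of a deaggregation event like this is redundant with aggregates already being announced.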
RE: Re[2]: CCO/cisco.com issues.
We've been handling a multi-vector DDoS - 40-byte spoofed SYN-flooding towards www.cisco.com

Now that they've come for cisco, maybe law enforcement, network operators, and router vendors will all get their $h!t together and do something to put a stop to these DDoS attacks that have been going on in various forms for several years. Maybe this will have the positive effect of motivating Cisco to do more to encourage best practices such as edge anti-spoof filtering. To begin with, Barry Greene's presentations on these issues are hidden away on his/Cisco's FTP server (ftp://ftp-eng.cisco.com/cons/) -- maybe it would be beneficial to put them (along with write-ups) in an easily-accessible and often-visited area of the main site where people will see them. These issues aren't just for ISPs: if edge networks would filter their borders, ISPs wouldn't have to do it for them. (Or in most cases, fail to do it for them.) -Terry
RE: as6198 aggregation event
James Cowie wrote: On Friday, we noted with some interest the appearance of more than six hundred deaggregated /24s into the global routing tables. More unusually, they're still in there this morning. AS6198 (BellSouth Miami) seems to have been patiently injecting them over the course of several hours, between about 04:00 GMT and 08:00 GMT on Friday morning (3 Oct 2003). If you look at the 09/19 and 09/26 CIDR Reports, BellSouth Atlanta (AS6197) did something similar during this time period -- they added about 350 deaggregated prefixes, most if not all /24's. Usually when we see deaggregations, they hit quickly and they disappear quickly; nice sharp vertical jumps in the table size. This event lasted for hours and, more importantly, the prefixes haven't come back out again, an unusual pattern for a single-origin change that effectively expanded global tables by half a percent. That AS6197's additions are still present isn't encouraging. -Terry
RE: Is there anything that actually gets users to fix their computers?
Daniel Karrenberg wrote: There is that too; but I have frequently observed people not doing it even when provided detailed step-by-step instructions. On the other hand they would proceed relatively quickly once it stopped working, e.g. the Internet plug was pulled. Some of them would use the instructions provided, others would get help; but not before it stopped working.

Indeed. It seems to be a motivation problem. "Also, using the net registering system we posted a virus alert and made information available," said Cunningham. Most people probably skipped through it though. Obviously, this is by no means specific to computer patching. People are either busy, lazy, apathetic, etc. Most don't pay attention until they're forced to; i.e., when their system stops working because a virus broke it or because their network access is shut off. You can ask nicely or post warnings a billion times to no avail. Human nature, perhaps. -Terry
RE: What were we saying about edge filtering?
Sean Donelan wrote: It gets even worse. Cisco has hard-coded the list of Bogons into some of its latest low-end IOS versions as part of its auto-secure feature. Yes, Cisco includes warnings in the manual that the user should check the official list at IANA; but I also know the power of defaults. People upgrade their IOS versions even less often than they update their Windows boxes. So we're going to see chunks of the net blocked depending on the release date of versions of IOS.

Adam Debus wrote: Do you have a reference page as to what platforms/releases/release trains that is being applied to? Seems like it might be a handy list to have bookmarked. :)

Per http://www.cisco.com/en/US/products/sw/iosswrel/ps5187/products_feature_guide09186a008017d101.html, it was introduced in 12.3 mainline. It's anyone's guess where it will end up from there, but note that it's already in a service provider train (12.2(18)S). So we may (or probably will?) end up with ISP's using the bogon-list feature as well. If one upgrades from version A of Autosecure-enabled IOS to version B of Autosecure-enabled IOS, will the bogon-list ACLs in the device's configuration be automatically updated? Or will the user have to disable and then re-enable Autosecure? Is this progress? Or is this something that seemed like a good idea at the time? -Terry
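A sketch of the staleness problem with a hard-coded bogon list (the lists below are illustrative, not Cisco's actual auto-secure ACLs; 69/8 is used as an example of space that went from unallocated to allocated):

```python
# Why a bogon list frozen into an IOS image rots: an address that was
# bogon when the image shipped may later be allocated by IANA, and the
# static ACL will then drop legitimate traffic. Example data only.

import ipaddress

BOGONS_AT_SHIP_TIME = [
    "10.0.0.0/8",    # RFC 1918 -- permanently bogon
    "69.0.0.0/8",    # unallocated when the image shipped (example)
]

ALLOCATED_SINCE = ["69.0.0.0/8"]   # later handed out by IANA (example)

def wrongly_blocked(src: str) -> bool:
    """True if a static bogon ACL would now drop legitimate traffic."""
    addr = ipaddress.ip_address(src)
    in_stale_list = any(addr in ipaddress.ip_network(b) for b in BOGONS_AT_SHIP_TIME)
    now_legitimate = any(addr in ipaddress.ip_network(a) for a in ALLOCATED_SINCE)
    return in_stale_list and now_legitimate

print(wrongly_blocked("69.1.2.3"))   # True: allocated space still filtered
print(wrongly_blocked("10.1.2.3"))   # False: still a real bogon
```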
RE: On the back of other 'security' posts....
the rest of the paper is also germane to this thread. just fya, we keep rehashing the UNimportant part of this argument, and never progressing. (from this, i deduce that we must be humans.)

Ok, so we seem to have a general agreement that anti-spoof BGP prefix filtering on all standard customer edge links is a worthwhile practice. Now what? Is there any hope of this ever happening on a very large scale without somehow being mandated? (Not that it necessarily should be mandated.) How much success have Barry Greene and co. had? Is there something the rest of us could be doing?
RE: On the back of other 'security' posts....
On Sunday, August 31, 2003 8:26 AM Stephen J. Wilcox wrote: On Sat, 30 Aug 2003, Terry Baranski wrote: In what instances is blocking spoofed traffic at the edge not feasible? (Spoofed as in not sourced from one of the customer's netblocks.) Where the customer is not a basic end user.. an ISP for example who may be transiting traffic from netblocks that are not theirs.

I've been using the term edge to refer to a standard customer; i.e., not an ISP. I tend to think of ISP-to-ISP links as borders, but I guess the term only applies to peers. But in any case, if all ISPs put anti-spoof filters on standard customer edge links as well as their own upstream links, is there any need for such filters anywhere else? It might be compared to deploying protocol extensions such as S(o)BGP: the benefit gained correlates with the ubiquity of the deployment.

We still have the other problem where a lot of large networks are using RFC1918 addresses that do not get NAT'd, thus filtering will break pMTU.. this is an issue in my experience mainly for those who host websites, altho many of those are filtering their own packets anyway and have broken sites!

Fair enough, though most seem to consider this a broken design practice. If one of the side effects of anti-spoof filtering is that broken networks break some more, maybe that's tolerable. -Terry
RE: What do you want your ISP to block today?
The problem isn't Microsoft's products or the knowledge of the consumer. The problem lies in the ISPs' unwillingness to make this issue disappear or at least reduce it dramatically, said Cooper. This is a disturbing viewpoint. Next thing you know we'll be blaming ISP's for file sharing... -Terry
RE: On the back of other 'security' posts....
Owen DeLong wrote: The ISPs aren't who should be sued. The people running vulnerable systems generating the DDOS traffic and the company providing the Exploding Pinto should be sued. An ISP's job is to forward IP traffic on a best effort basis to the destination address contained in the header of the datagram. Any other behavior can be construed as a breach of contract. Sure, blocking spoofed traffic in the limited cases where it is feasible at the edge would be a good thing, but, I don't see failure to do so as negligent.

In what instances is blocking spoofed traffic at the edge not feasible? (Spoofed as in not sourced from one of the customer's netblocks.)

Where exactly do you think that the duty of care in this matter would come from for said ISP?

Isn't the edge by far the easiest and most logical place to filter spoofed packets? What are the good reasons not to do so?

Again, I just don't see where an ISP can or should be held liable for forwarding what appears to be a correctly formatted datagram with a valid destination address.

I guess "correctly formatted" is a relative term. When *isn't* a packet with a spoofed source IP address guaranteed to be illegitimate? Maybe such packets shouldn't be considered correct.

This is the desired behavior and without it, the internet stops working.

The Internet stops working when legitimate packets aren't forwarded. Spoofed packets don't fall into this category.

The problem is systems with consistent and persistent vulnerabilities. One software company is responsible for most of these, and, that would be the best place to concentrate any litigation aimed at fixing the problem through liquidated damages.

I don't think it's appropriate to point the finger at one entity here. Lots of folks can play a part in helping out with this problem.
That spoofed packets often originate from compromised hosts running Microsoft software doesn't justify ISPs standing around with their hands in their pockets if there are reasonably simple measures they can take to prevent such packets from ever getting past their edge routers. If edge filtering isn't considered a reasonably simple thing to do, I'd like to hear the reasons why. -Terry
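The edge check argued for above amounts to something like this sketch (the netblocks are illustrative; on a real router this would be an ingress ACL or uRPF, not code, and it is essentially what BCP 38 describes):

```python
# Minimal sketch of an edge anti-spoof check: on a customer-facing
# interface, forward a packet only if its source address falls within
# that customer's assigned netblocks. Netblocks here are examples.

import ipaddress

CUSTOMER_NETBLOCKS = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/25"),
]

def permit_ingress(src_ip: str) -> bool:
    """True if the source address is legitimate for this edge link."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in CUSTOMER_NETBLOCKS)

print(permit_ingress("192.0.2.55"))    # True: customer's own space
print(permit_ingress("203.0.113.9"))   # False: spoofed source, drop
```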
RE: Lazy Engineers and Viable Excuses
If folks want to filter, please, please, PLEASE, employ IRR infrastructure and filter customers *AND* peers explicitly. If your vendors have issues with this, push them to fix it. Then you don't have to worry about bogons, max-prefixes, route hijacking, de-aggregation, or... Then we can worry about IRR infrastructure hardening and accuracy and derive explicit data plane filters from the output, as well as other tangible benefits. Is it really that hard?

I can see not filtering peers if the hardware can't handle it, but there doesn't appear to be such a good excuse for not edge filtering. If Barry Greene is listening out there, I'd be interested to hear how successful he and his team have been at convincing ISPs to do this. I know he's been on his ISP Security Best Practices world tour for quite a while now, and am curious if he's found it more difficult to get edge filtering implemented vs. other security measures (and if so, why) or if it's just security in general that's difficult to get ISPs to do. Also, are recommendations given for how edge filters should be maintained? This was discussed here a short while ago but I don't think a broad consensus was reached. The mere existence of the filters is nice to prevent AS7007-like incidents but does little to prevent other bad things such as address hijacking if the customer (or someone posing as the customer?) can call the ISP and get holes punched in a filter for address blocks that he/she has no business announcing. It seems that a common practice amongst ISPs who do filter on the edge is to blindly punch holes in these filters when asked without somehow validating the request. Does this negate a significant portion of the advantages gained by edge filtering or are we primarily concerned with accidental/malicious route table leaking at this point? -Terry
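Deriving an explicit prefix filter from IRR route objects might be sketched like this (the route objects are hard-coded, hypothetical examples; a real tool would query an IRR server, in the spirit of IRRToolSet):

```python
# Sketch of building an explicit per-customer prefix filter from IRR
# route objects, as suggested above. Registered routes for the customer
# AS are accepted, anything else rejected. Example data only.

import ipaddress

# Hypothetical IRR route objects for a customer AS
ROUTE_OBJECTS = [
    {"route": "192.0.2.0/24", "origin": "AS64496"},
    {"route": "198.51.100.0/24", "origin": "AS64496"},
]

def build_prefix_filter(asn: str, max_len: int = 24) -> set:
    """Accept only registered routes for the AS, no longer than max_len."""
    allowed = set()
    for obj in ROUTE_OBJECTS:
        net = ipaddress.ip_network(obj["route"])
        if obj["origin"] == asn and net.prefixlen <= max_len:
            allowed.add(str(net))
    return allowed

def accept_announcement(asn: str, prefix: str) -> bool:
    return prefix in build_prefix_filter(asn)

print(accept_announcement("AS64496", "192.0.2.0/24"))    # True
print(accept_announcement("AS64496", "203.0.113.0/24"))  # False: not registered
```

A filter built this way is only as good as the IRR data behind it, which is the hardening-and-accuracy caveat the quoted post raises.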
RE: Etiquette and rules regarding Hijacked ASN's or IP space?
At the moment there is no clear procedure for any ISP to follow to even get a best guess as to whether an advertisement should be accepted or not.

What about requiring that a route appear in an RIR database, period? Maybe that would be a good start. It's easy enough to do but virtually no one seems to do it. We've seen how lengthy The CIDR Report's list of unregistered (but nonetheless advertised) routes is -- why are these advertisements being accepted? This doesn't directly address hijacking, but it seems to me that there's no reason to spend time looking for old, unused, potentially hijackable address blocks if just about any ISP out there will accept your announcements of blocks that aren't even allocated. (Note: I'm not talking about IANA Reserved space.)
RE: Symantec detected Slammer worm hours before
Apologies if this is old news. It's from Thursday, but I didn't see it until today. Symantec comes clean. Somewhat: http://www.theregister.co.uk/content/56/29406.html

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Sean Donelan
Sent: Thursday, February 13, 2003 12:00 PM
To: [EMAIL PROTECTED]
Subject: Symantec detected Slammer worm hours before

Wow, Symantec is making an amazing claim. They were able to detect the Slammer worm hours before. Did anyone receive early alerts from Symantec about the SQL Slammer worm hours earlier? Academics have estimated the worm spread world-wide, and reached its maximum scanning rate in less than 10 minutes. I assume Symantec has some data to back up their claim. http://enterprisesecurity.symantec.com/content.cfm?articleid=1985&EID=0 For example, the DeepSight Threat Management System discovered the Slammer worm hours before it began rapidly propagating. Symantec's DeepSight Threat Management System then delivered timely alerts and procedures, enabling administrators to protect against the attack before their environment was compromised.