Re: Abuse response [Was: RE: Yahoo Mail Update]
On Tue, Apr 15, 2008 at 08:49:39PM -0400, Martin Hannigan wrote:
> Abuse desk is a $0 revenue operation. Is it not obvious what the issue is?

Two points: the first addresses this directly, and the second is more of a recommended attitude.

1. There is no doubt that many operations consider it so, but it's really not. Operations which don't adequately deal with abuse issues are going to incur tangible and intangible costs (e.g., money spent cleaning up local messes and getting off numerous blacklists, loss of business due to reputation, etc.). Those costs are likely to increase as more and more people become increasingly annoyed with abuse-source operations and express that via software and business decisions. I'll concede that this is really difficult to measure (at the moment), but it's not zero.

2. When one's network operation abuses someone (or someone else's operation), you owe them a fix, an explanation, and an apology. After all, it happened in your operation, on your watch; therefore you're personally responsible for it. And when someone in that position -- a victim of abuse -- has magnanimously documented the incident and reported it to you, thus providing you with free consulting services, you owe them your thanks. After all, they caught something that got by you -- and they've shared it with you, thus enabling you to run a better operation, which in turn means fewer future abuse incidents, which in turn means lower tangible and intangible costs. And far more importantly, it means being a better network neighbor, something we should all be working toward all the time.

---Rsk
Re: Abuse response [Was: RE: Yahoo Mail Update]
On Wed, Apr 16, 2008 at 11:07:42AM +0100, [EMAIL PROTECTED] wrote:
> If people had succeeded in cleaning up the abuse problems in 1995 when the human touch was still feasible, we would not have the situation that we have today. Automation is the only way to address the flood of abuse email, the huge number of people originating abuse, and the agile tactics of the abusers.

I agree with this and with pretty much everything else you wrote. But...

If an operation is permitting itself to be such a systemic, persistent source of abuse that the number of abuse reports it's receiving (which everyone knows is a tiny fraction of the number it *could* be receiving) requires automation... isn't that a pretty good sign that whatever's being done to control abuse isn't working? The solution to that isn't to put in place higher levels of automation: the solution is to *solve the underlying problems* so that higher levels of automation aren't necessary.

---Rsk
Re: Abuse response [Was: RE: Yahoo Mail Update]
I largely concur with the points that Paul's making, and would like to augment them with these:

- Automation is far less important than clue. Attempting to compensate for the lack of a sufficient number of sufficiently intelligent, experienced, diligent staff with automation is a known losing strategy, as anyone who has ever dealt with an IVR system knows.

- Trustability is unrelated to size. There are one-person operations out there that are obviously far more trustable than huge ones.

- Don't build what you can't control. Abuse handling needs to be factored into service offerings and growth decisions, not blown off and thereby forcibly delegated to the entire rest of the Internet.

- Poorly-designed and poorly-run operations markedly increase the workload for their own abuse desks.

- A nominally competent abuse desk handles reports quickly and efficiently. A good abuse desk DOES NOT NEED all those reports, because it already knows. (For example, large email providers should have large numbers of spamtraps scattered all over the 'net and should be using simple methods to correlate what arrives at them to provide themselves with an early heads-up. This won't catch everything, of course, but it doesn't have to.)

---Rsk
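The spamtrap-correlation idea in the last point above can be sketched very simply. This is a hedged illustration, not anyone's production system: the function name, the threshold, and the sample IPs/trap names are all invented for the example.

```python
def early_warning(trap_hits, min_traps=3):
    """Given (source_ip, trap_address) pairs observed at a provider's
    spamtraps, flag source IPs seen at several *distinct* traps --
    a crude but useful sign of a spam run in progress.  The threshold
    is illustrative, not a recommendation."""
    traps_per_ip = {}
    for ip, trap in trap_hits:
        traps_per_ip.setdefault(ip, set()).add(trap)
    return sorted(ip for ip, traps in traps_per_ip.items()
                  if len(traps) >= min_traps)

# Hypothetical observations: one source hits three distinct traps,
# another hits only one.
hits = [("203.0.113.5", "trap-a"), ("203.0.113.5", "trap-b"),
        ("203.0.113.5", "trap-c"), ("198.51.100.9", "trap-a")]
```

Here `early_warning(hits)` would flag only 203.0.113.5 -- exactly the "early heads-up" described above: no incoming abuse report was needed to notice it.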
Re: Abuse response
On Tue, Apr 15, 2008 at 02:01:26PM +0100, [EMAIL PROTECTED] wrote:
> > - Automation is far less important than clue. Attempting to compensate for lack of a sufficient number of sufficiently-intelligent, experienced, diligent staff with automation is a known-losing strategy, as anyone who has ever dealt with an IVR system knows.
>
> Given that most of us use routers instead of pigeons to transport our packets, I would suggest that railing against automation is a lost cause here.

I'm not suggesting that automation is bad. I'm suggesting that trying to use it as a substitute for certain things, like clue, is bad. When used *in conjunction with clue*, it's marvelous.

> This sounds like a blanket condemnation of the majority of ISPs in today's Internet.

Yes, it is. I regard it as everyone's primary responsibility to ensure that their operation isn't a (systemic, persistent) operational hazard to the entire rest of the Internet. That's really not a lot to ask... and there was a time when it wasn't necessary to ask, because everyone just did it. Where has that sense of professional responsibility gone?

> Why is it that spamtraps are not mentioned at all in MAAWG's best practices documents except the one for senders, i.e. mailing list operators?

I can't answer that, as I didn't write them. But everyone (who's been paying attention) has known for many years that spamtraps are useful for catching at least *some* of the problem, with the useful feature that the worse the problem is, the higher the probability this particular detection method will work.

Another example I'll give of a loose-but-useful detection method: any site which does mass hosting should be screening all new customer domains for patterns like pay.*pal.*\. and \.cit.*bank.*\. and flagging for human attention any that match.
Again, this won't catch everything, but it will at least give a fighting chance of catching *something*, thus hopefully pre-empting some abuse before it happens and minimizing cleanup labor/cost/impact. In addition, this sort of thing actively discourages abusers: sufficiently diligent use of many tactics like this causes them to stay away in droves, which in turn reduces abuse desk workload. But (to go back to the first point) none of it works without smart, skilled, empowered people, and while automation is an assist, it's no substitute.

---Rsk
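The domain-screening idea above can be sketched in a few lines, using the two example patterns from the text. The function name and the sample sign-up domains are invented for illustration; a real deployment would carry a much longer pattern list.

```python
import re

# The two example patterns given above, compiled case-insensitively.
PHISH_PATTERNS = [re.compile(p, re.IGNORECASE)
                  for p in (r"pay.*pal.*\.", r"\.cit.*bank.*\.")]

def flag_suspicious(domains):
    """Return the new-customer domains matching any screening pattern,
    i.e. the ones that deserve a look from a human before activation."""
    return [d for d in domains
            if any(p.search(d) for p in PHISH_PATTERNS)]

# Hypothetical new sign-ups at a mass-hosting operation:
new_signups = ["pay-pal-secure.example.", "images.example.",
               "login.cit1bank.example."]
```

Running `flag_suspicious(new_signups)` would surface the first and third names for human review, while the innocuous one passes through -- loose matching, exactly as described: it won't catch everything, but it catches *something*.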
Re: Abuse response [Was: RE: Yahoo Mail Update]
On Tue, Apr 15, 2008 at 11:22:59AM -0400, William Herrin wrote:
> There's a novel idea. Require incoming senior staff at an email company to work a month at the abuse desk before they can assume the duties for which they were hired. My hunch says that's a non-starter. It also doesn't keep qualified folks at the abuse desk; it shuffles them through.

Require all technical staff and their management to work at the abuse desk on a rotating basis. This should provide them with ample motivation to develop effective methods for controlling abuse generation, thus reducing the requirement for abuse mitigation, and thus reducing the time they have to spend doing it.

---Rsk
Re: Yahoo Mail Update
On Sun, Apr 13, 2008 at 03:55:13PM -0500, Ross wrote:
> Again I disagree with the principle that this list should be used for mail operation issues but maybe I'm just in the wrong here.

I don't think you're getting what I'm saying, although perhaps I'm not saying it very well. What I'm saying is that operational staff should be *listening* to relevant lists (of which this is one) and that operational staff should be *talking* on lists relevant to their particular issue(s). Clearly, NANOG is probably not the best place for most SMTP or HTTP issues, but some of the time, when those issues appear related to topics appropriate for NANOG, it might be. The rest of the time, the mailop list is probably more appropriate.

While I prefer to see topics discussed in the best place (though there is considerable debate over what that might be), I think that things have gotten so bad that I'm willing to settle, in the short term, for *a* place, because it's easier to redirect a conversation once it's underway than it seems to be to start one. For example: the silence from Yahoo on this very thread is deafening.

---Rsk
Re: the O(N^2) problem
On Mon, Apr 14, 2008 at 01:41:50PM +, Edward B. DREGER wrote:
> When one accepts an email[*], one wishes for some sort of _a priori_ information regarding message trustworthiness. DKIM can vouch for message authenticity, but not trust.

At the moment, this problem can't be solved on an Internet scale, because there are on the order of 10^8 fully-compromised systems out there. Many different estimates have been proffered over the years; the most recent I've seen is from Rick Wesson at Support Intelligence, who offered 40% as his guesstimate; if there are 800M systems on the 'net, that'd be about 320M. But the exact number is unknowable and in some sense unimportant: the difference between 128M and 172M doesn't matter for the purpose of this discussion. And I believe there is widespread concurrence that, whatever the number is, it's going up.

The new owners of those systems can do anything with them they want, including forging (and cryptographically signing) outbound mail messages using any SMTP authorization credentials present on them, or any SMTP access implied by their network location(s). (They can also, if they wish, arrange to conceal incoming replies to this traffic from the former owners.)

Until that problem's solved (and I don't see any solution for it on the horizon), it will undercut any number of interesting approaches worthy of significant discussion, not just this one. It's the elephant in the room, and until it's banished, it will keep getting in the way.

---Rsk
Re: Yahoo Mail Update
On Sun, Apr 13, 2008 at 12:58:59AM -0500, Ross wrote:
> On Thu, Apr 10, 2008 at 8:54 PM, Rich Kulawiec [EMAIL PROTECTED] wrote:
> > I heartily second this. Yahoo (and Hotmail) (and Comcast and Verizon) mail system personnel should be actively participating here, on mailop, on spam-l, etc. A lot of problems could be solved (and some avoided) with some interaction.
>
> Why should large companies participate here about mail issues? Last I checked this wasn't the mailing list for these issues.

It's got nothing to do with size (large); Joe's ISP in Podunk should be on these lists as well. And one of the reasons I suggested multiple lists is that each has its own focus, so those involved with the care and feeding of mail systems should probably be on a number of them, in order to interact with something approximating the right set of peers at other operations. (Of course not all lists are appropriate for all topics.)

> But let's just say for a second this is the place to discuss company xyz's mail issue. What benefit do they have participating here? Likely they'll be hounded by people who have some disdain for their company and no matter what they do they will still be evil or wrong in some way.

They're more likely to be hounded by people who have disdain for their incompetence and the resulting operational issues they impose on their peers. But if they're reluctant to face the unhappiness of their peers -- those whose networks, systems and users are abused on a daily basis and who thus have ample reason to be unhappy -- then maybe they should try something different, such as doing their jobs properly.

> It is easy for someone who has 10,000 users to tell someone who has 50 million users what to do when they don't have to work with such a large scale enterprise.

This is mythology. Someone who can *competently* run a 10,000-user operation will have little-to-no difficulty running a 50-million-user operation. (In some ways, the latter is considerably easier.) It's not a matter of the size of anyone's operation; it's a matter of how well it's run, which in turn speaks to the knowledge, experience, diligence, etc. of those running it.

---Rsk
Re: Problems sending mail to yahoo?
On Sun, Apr 13, 2008 at 08:04:12PM -0400, Barry Shein wrote a number of things that are true, including:
> I say the core problem in spam are the botnets capable of delivering on the order of 100 billion msgs/day.

But I say the core problem is deeper. Spam is merely a symptom of an underlying problem. (I'll admit that I often use the phrase "spam problem," but that's somewhat misleading.) The problem is pervasive poor security. Those botnets would not exist were it not for nearly-ubiquitous deployment of an operating system that cannot be secured -- and we know this because we've seen its own vendor repeatedly try and repeatedly fail. But a miserable excuse for an OS is just one of the causes; others have been covered by essays like Marcus Ranum's "Six Dumbest Ideas in Security," so I won't attempt to enumerate them all.

That underlying security problem gives us many symptoms: spam, phishing, typosquatting, DDoS attacks, adware, spyware, viruses, worms, data loss incidents, web site defacements, search engine gaming, DNS cache poisoning, and a long list of others. Dealing with symptoms is good: it makes the patient feel better. But it shouldn't be confused with treatment of the disease. Even if we could snap our fingers and stop all spam permanently tomorrow, (a) it wouldn't do us much good and (b) some other symptom would evolve to fill its niche in the abuse ecosystem.

A secondary point that actually might be more important: we (and I really do mean "we," because I've had a hand in this too) have compounded our problems by our collective response -- summed up beautifully on this very mailing list a while back thusly:

> If you give people the means to hurt you, and they do it, and you take no action except to continue giving them the means to hurt you, and they take no action except to keep hurting you, then one of the ways you can describe the situation is "it isn't scaling well."
>     --- Paul Vixie on NANOG

We need to hold ourselves accountable for the security problems in our own operations, and then we need to hold each other accountable. This is very different from our strategy to date -- which, I submit, has thoroughly proven itself to be a colossal failure.

---Rsk
Re: /24 blocking by ISPs - Re: Problems sending mail to yahoo?
On Sat, Apr 12, 2008 at 09:36:43AM -0700, Matthew Petach wrote:
> *heh* And yet just last year, Yahoo was loudly denounced for keeping logs that allowed the Chinese government to imprison political dissidents. Talk about damned if you do, damned if you don't...

But those are very different kinds of logs -- ones with personally identifiable information. I see a sharp difference between those and logs which record (let's say) SMTP abuse incidents/attempts by originating IP address.

---Rsk
Re: Problems sending mail to yahoo?
On Thu, Apr 10, 2008 at 11:58:05PM -0400, Rob Szarka wrote:
> I report dozens of spams from my personal account alone every day and never receive anything other than automated messages claiming to have dealt with the same abuse that continues around the clock or, worse, bogus/clueless claims that the IP in question is not theirs and suggestions that I check the same ARIN database that I used to confirm the responsible party in the first place.

I gave up sending abuse reports to Yahoo (and Hotmail) many years ago. All available evidence strongly indicates that there is nobody there who understands them, is capable of taking effective action, or cares to take any effective action. That evidence includes not just their complete failure to control outbound abuse, but their ill-advised and ineffective attempts to control inbound abuse (as we see in this thread), their complete failure to participate in abuse forums such as Spam-L, their complete failure to shut down spammer/phisher domains they're hosting, and their complete failure to shut down spammer/phisher dropboxes they're providing.

Sadly, Google's Gmail appears to be on the first steps down this same path. I had hoped for a display of markedly higher clue level from them, but -- for whatever reason -- it hasn't manifested itself yet. So in the short term, advising customers that Yahoo's and Hotmail's freemail services are of very poor quality and should never be relied on for anything, and that Gmail is a better choice, is probably viable. In the long term, though, I think it may only delay the inevitable.

---Rsk
Re: spam wanted :)
On Thu, Apr 10, 2008 at 06:32:53PM +0900, Randy Bush wrote:
> for a measurement experiment, i would like O(100k) *headers* from spam from europe and a similar sample from the states.

Request for clarification: do you mean spam originating at IP addresses believed to be in Europe, or spam received at a mail server located in Europe, or spam putatively from domains in Europe, or something else?

---Rsk
Re: Problems sending mail to yahoo?
On Thu, Apr 10, 2008 at 01:30:06PM -0400, Barry Shein wrote:
> Is it just us or are there general problems with sending email to yahoo in the past few weeks?

It's not you. Lots of people are seeing this, as Yahoo's mail servers are apparently too busy sending ever-increasing quantities of spam to accept inbound traffic. Sufficiently persistent and lucky people have sometimes managed to penetrate Yahoo's outer clue-resistant shell and effect changes, but some of those seem ineffective and temporary. There doesn't seem to be any simple, universal fix for this other than advising people that Yahoo's email service is already miserable and continues to deteriorate, and hoping that they migrate elsewhere.

---Rsk
Re: Yahoo Mail Update
On Thu, Apr 10, 2008 at 05:51:23PM -0700, chuck goolsbee wrote:
> Thanks for the update Jared. I can understand your request to not be used as a proxy, but it exposes the reason why Yahoo is thought to be clueless: they are completely opaque. They cannot exist in this community without having some visibility and interaction on an operational level.

I heartily second this. Yahoo (and Hotmail) (and Comcast and Verizon) mail system personnel should be actively participating here, on mailop, on spam-l, etc. A lot of problems could be solved (and some avoided) with some interaction.

---Rsk
Re: default routes question or any way to do the rebundant
Can someone put this in a digest for me? eg

----- Original Message -----
From: [EMAIL PROTECTED]
To: Barry Shein [EMAIL PROTECTED]
Cc: nanog@merit.edu
Sent: Fri Mar 21 16:44:39 2008
Subject: Re: default routes question or any way to do the rebundant

On Fri, Mar 21, 2008 at 4:29 PM, Barry Shein [EMAIL PROTECTED] wrote:
> Is this for real? Someone asks a harmless question about setting up multiple default routes, not about Barack Obama or whether the moon is made of green cheese, but about default routes. Then 10 people decide to respond that this isn't appropriate for nanog. Then 25 people decide to dispute that. Then 50 people are arguing (ok maybe I exaggerate but just a little) about it. So the person who asked the original question feels bad and apologizes. And 5 people decide to tell her there's nothing to apologize for. And 10 people dispute that... and... what next? Oh, right, and next I feel an urge to write this idiotic meta-meta-meta-note. I think psychologists have a term for this, chaotic instability disorder or something like that. Maybe what we need are NANOG GREETERS! "Hello, welcome to Nanog, can we help you find something? Hello, welcome to Nanog, can we help you find something?"... Blue light special in slot 5? V6-only STM64's now half price!

<personal opinion> I don't think that there's any issue at all, to be honest. NANOG isn't just for the clued. </personal opinion>

Best, Marty
Re: mtu mis-match
On Wed, Mar 19, 2008 at 12:05:19PM -0700, ann kok wrote:
> Some DSL clients are working fine (browsing... ping...). Some DSL clients have this problem: they can't browse the sites. They can ssh to the host but couldn't run the command in the shell prompt. Ping packets are working fine (no packet lost). Why? I still don't know why MTU can cause this problem.

Path MTU discovery failures are one of the possible causes for what you're seeing. (For example, you can establish an ssh connection to a host, because none of those packets exceeds the path MTU. But as soon as you run a command that generates a substantial amount of output, the connection will appear to hang: the remote host keeps retrying to send the same data because it never sees an ACK, while the local host never sees that data because it exceeds the path MTU.) This is often caused by overly-aggressive filtering of ICMP.

I recommend taking a look at http://www.znep.com/~marcs/mtu/ as well as http://www.cymru.com/Documents/icmp-messages.html and then checking the configurations of network devices to make sure that ICMP type 3 code 4 traffic isn't being blocked.

---Rsk
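To make that failure mode concrete, here is a small, purely illustrative model of PMTUD (the function and its parameters are invented for this sketch; it describes the behavior, it does not probe a real network):

```python
def pmtud_outcome(packet_size, path_mtu, df_set=True, icmp_filtered=False):
    """Model what a sender observes when a packet meets a hop whose
    MTU is path_mtu.

    - Packets that fit are delivered.
    - Oversized DF packets normally trigger an ICMP type 3 code 4
      ("fragmentation needed") reply, so the sender resends smaller.
    - If that ICMP is filtered, the packet is silently black-holed,
      which the user experiences as a hang.
    """
    if packet_size <= path_mtu:
        return "delivered"
    if not df_set:
        return "fragmented and delivered"
    if icmp_filtered:
        return "black-holed (connection appears to hang)"
    return "ICMP type 3 code 4 returned; sender resends smaller"

# The ssh handshake rides in small packets and gets through; the burst
# of command output rides in full-size segments and vanishes when the
# "fragmentation needed" ICMP is filtered somewhere along the path.
```

This is exactly why the connection "works" right up until the first command that produces real output.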
Re: YouTube IP Hijacking
<Jake Blues mode> I hate Cyber Jihads! </Jake Blues mode>

----- Original Message -----
From: [EMAIL PROTECTED]
To: Neil Fenemor [EMAIL PROTECTED]
Cc: Will Hargrave [EMAIL PROTECTED]; nanog@merit.edu
Sent: Sun Feb 24 16:06:50 2008
Subject: RE: YouTube IP Hijacking

Clearly, they are incensed by youtube content, so what makes anyone think that they would not be trying to engage in a case of Cyber-Jihad? I hosted the site that was rated #1 on Google for the Jyllands-Posten (di2.nu) cartoons when it was a current issue, and I STILL get lots of script-kiddie DoS from the Islamic world. I generally don't assume malice when mere incompetence will suffice, but in the case of the Islamic world, they've proved themselves malicious towards the non-Islamic world often, and violently, enough, that I don't believe they deserve that presumption of innocence any more. In either case, the correct COA is to filter all advertisements with AS 17557 in the path until they fix the routes they are advertising and let us know how they plan on making sure this doesn't happen again.

-----Original Message-----
From: Neil Fenemor [mailto:[EMAIL PROTECTED]]
Sent: Sunday, February 24, 2008 1:01 PM
To: Tomas L. Byrnes
Cc: Will Hargrave; nanog@merit.edu
Subject: Re: YouTube IP Hijacking

While they are deliberately blocking Youtube nationally, I suspect the wider issue has no malice, and is a case of poorly constructed/implemented outbound policies on their part, and poorly constructed/implemented inbound policies on their upstream's part.

On 25/02/2008, at 9:49 AM, Tomas L. Byrnes wrote:
> Pakistan is deliberately blocking Youtube. http://politics.slashdot.org/article.pl?sid=08/02/24/1628213 Maybe we should all block Pakistan.

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Will Hargrave
Sent: Sunday, February 24, 2008 12:39 PM
To: [EMAIL PROTECTED]
Subject: Re: YouTube IP Hijacking

Sargun Dhillon wrote:
> So, it seems that youtube's ip block has been hijacked by a more specific prefix being advertised. This is a case of IP hijacking, not a case of DNS poisoning, youtube engineers doing something stupid, etc. For people that don't know: the router will try to get the most specific prefix. This is by design, not by accident.

You are making the assumption of malice when the more likely cause is one of accident on the part of probably stressed NOC staff at 17557. They probably have that /24 going to a gateway walled-garden box which replies with a site saying 'we have banned this', and that /24 route is leaking outside of their AS via PCCW due to dodgy filters/communities.

Will

Neil Fenemor
FX Networks
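The most-specific-prefix behavior described in this thread is easy to demonstrate. A minimal sketch follows; the prefixes are documentation addresses invented for the example, not YouTube's actual 2008 announcements:

```python
import ipaddress

def best_route(dst, routes):
    """Longest-prefix match: among routes covering dst, pick the most
    specific (longest) prefix.  This is why a leaked, more-specific /24
    overrides the victim's legitimate, shorter covering prefix
    everywhere the leak propagates."""
    addr = ipaddress.ip_address(dst)
    matches = [ipaddress.ip_network(r) for r in routes
               if addr in ipaddress.ip_network(r)]
    return str(max(matches, key=lambda n: n.prefixlen)) if matches else None

routes = ["203.0.113.0/24",   # hijacker's more-specific leak
          "203.0.112.0/22"]   # legitimate covering aggregate
```

For any destination inside the /24, `best_route` selects the leak; addresses elsewhere in the /22 still follow the legitimate aggregate. By design, not by accident.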
Re: [EMAIL PROTECTED]
On Fri, Jan 18, 2008 at 09:43:35AM -0800, Mike Lyon wrote:
> Could someone who reads (or is supposed to read...) empty the mailbox over at [EMAIL PROTECTED]?

It would appear that little has changed:

  The following addresses had delivery problems:
  [EMAIL PROTECTED]
  Permanent Failure: 522_mailbox_full;_sz=629145594/629145600_ct=70494/10
  Delivery last attempted at Sat, 25 Oct 2003 12:52:40 -

Remember, though -- per Comcast's official position -- they take the spam problem seriously.

---Rsk
Re: Creating a crystal clear and pure Internet
On Tue, Nov 27, 2007 at 09:38:40AM -0500, Sean Donelan wrote:
> Some people have compared unwanted Internet traffic to water pollution, and proposed that ISPs should be required to be like water utilities and be responsible for keeping the Internet water crystal clear and pure.

Yes -- well, not "unwanted" IMHO, but "abusive." (Much traffic that's unwanted is not abusive. For example, in the view of some readers of this mailing list, some of the longer/more caustic/repetitive debates might very well be unwanted. But that traffic is clearly not abusive.)

> Several new projects have started around the world to achieve those goals. ITU anti-botnet initiative [snip] France anti-piracy initiative

Only the first one has anything to do with keeping the Internet clean; the second is a political cave-in to the copyright cartel. I see a (mostly) clear line between things that are abusive of the Internet, systems connected to it, and users of those systems, and content that's unwanted, offensive, or claimed to be covered under someone's interpretation of IP law.

The first category contains things like spam, phishing, spyware, spam/phishing/spyware support services (DNS, web hosting, maildrops), DoS attacks, hijacked networks, etc. The second category contains things like porn, religion, politics, music, movies -- via whatever means are used to convey them (mail, web, p2p, etc.) -- all of which are certain to irritate someone, somewhere, and much of which could probably be construed (by a sufficiently creative legal practitioner) to infringe on somebody's IP.

In my view, it's the responsibility of everyone on the net to do whatever they can to squelch the first. But they have no obligations at all when it comes to the second -- that way lies the slippery slope of content policing and censorship.

---Rsk
Re: unwise filtering policy from cox.net
On Wed, Nov 21, 2007 at 06:51:42AM +, Paul Ferguson wrote:
> Sure, it's an unfortunate limitation, but I hardly think it's an issue to hand-wave about and say "oh, well." Suggestions?

There are numerous techniques available for addressing this problem. Which one(s) to use depends on the site's mail architecture, so I'm not going to try to enumerate them all -- only to give a few examples.

Example 1: exempt the abuse@ address from all anti-* processing; just deliver it. All the MTAs I've worked with provide features to support this; it's also sometimes necessary to make that exemption elsewhere (e.g., in programs invoked as milters). Oh, and don't greylist it either.

Example 2: if using a multi-tier architecture (increasingly a good idea, as it insulates internal traffic from the beating often inflicted by external traffic), then re-route abuse@ mail to its own dedicated system (using a mechanism like the sendmail virtual user table or equivalent). Make that system something relatively impervious, and choose hardware that can be replaced quickly at low cost. (My suggestion: OpenBSD on a Sparc Ultra 2, and use mutt as the mail client. Keep a couple of spares in the basement; they're dirt-cheap.)

---Rsk
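The re-routing step in Example 2 might look like the following virtual-user-table sketch. The domain and hostnames are invented placeholders, and the exact syntax should be checked against the local sendmail setup:

```
# /etc/mail/virtusertable (illustrative only; names are made up)
# Route abuse@ to a dedicated, hardened intake box instead of the
# regular (filtered, greylisted) delivery path.
abuse@example.net        abuse@abuse-intake.example.net
```

After editing, the table typically needs to be rebuilt (e.g., with makemap) and sendmail told to pick it up; the equivalent in Postfix would be a virtual alias map entry.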
Re: Can P2P applications learn to play fair on networks?
I'm a bit late to this conversation, but I wanted to throw out a few bits of info not covered.

A company called Oversi makes a very interesting solution for caching Torrent and some Kad-based overlay networks, all done through some cool strategically placed taps and prefetching. This way you could cache out at whatever rates you want and mark traffic how you wish as well. This moves a statistically significant amount of traffic off of the upstream and onto a gigabit-ethernet (or something) attached cache server, solving large bits of the HFC problem. I am a fan of this method as it does not require a large footprint of inline devices, but rather a smaller footprint of statistics-gathering sniffers and caches distributed in places that make sense.

Also, the people at BitTorrent Inc have a cache discovery protocol so that their clients have the ability to find cache servers with their hashes on them. I am told these methods are in fact covered by the DMCA, but remember I am no lawyer.

Feel free to reply direct if you want contacts

Rich

--------------------------------------------------
From: Sean Donelan [EMAIL PROTECTED]
Sent: Sunday, October 21, 2007 12:24 AM
To: nanog@merit.edu
Subject: Can P2P applications learn to play fair on networks?

> Much of the same content is available through NNTP, HTTP and P2P. The content part gets a lot of attention and outrage, but network engineers seem to be responding to something else. If it's not the content, why are network engineers at many university networks, enterprise networks, and public networks concerned about the impact particular P2P protocols have on network operations? If it was just a single network, maybe they are evil. But when many different networks all start responding, then maybe something else is the problem.
>
> The traditional assumption is that all end hosts and applications cooperate and fairly share network resources. NNTP is usually considered a very well-behaved network protocol: big bandwidth, but sharing network resources. HTTP is a little less behaved, but still roughly seems to share network resources equally with other users. P2P applications seem to be extremely disruptive to other users of shared networks, and cause problems for other polite network applications.
>
> While it may seem trivial from an academic perspective to do some things, for network engineers the tools are much more limited. User/programmer/etc education doesn't seem to work well. Unless the network enforces a behavior, the rules are often ignored. End users generally can't change how their applications work today even if they wanted to. Putting something in-line across a national/international backbone is extremely difficult. Besides, network engineers don't like additional in-line devices, no matter how much the sales people claim they're fail-safe.
>
> Sampling is easier than monitoring a full network feed. Using netflow sampling or even SPAN-port sampling is good enough to detect major issues. For the same reason, asymmetric sampling is easier than requiring symmetric (or synchronized) sampling. But it also means there will be a limit on the information available to make good and bad decisions.
>
> Out-of-band detection limits what controls network engineers can implement on the traffic. USENET has a long history of generating third-party cancel messages. IPS systems and even passive taps have long used third-party packets to respond to traffic. DNS servers have been used to re-direct subscribers to walled gardens. If applications responded to ICMP Source Quench or other administrative network messages, that might be better; but they don't.
Re: Can P2P applications learn to play fair on networks?
Frank,

The problem caching solves in this situation is much less complex than what you are speaking of. Caching toward your client base brings down your transit costs (if you have any) or lowers congestion in congested areas, if the solution is installed in the proper place. Caching toward the rest of the world gives you a way to relieve stress on the upstream for sure.

Now of course it is a bit outside of the box to think that providers would want to cache not only for their internal customers but also for users of the open Internet. But realistically that is what they are doing now with any of these peer-to-peer overlay networks; they just aren't managing the boxes that house the data. Getting it under control and off of problem areas of the network should be the first (and not just a future) solution. There are both negative and positive methods of controlling this traffic. We've seen the negative, of course; perhaps the positive is to give the user what they want... just on the provider's terms.

my 2 cents

Rich

--------------------------------------------------
From: Frank Bulk [EMAIL PROTECTED]
Sent: Monday, October 22, 2007 7:42 PM
To: 'Rich Groves' [EMAIL PROTECTED]; nanog@merit.edu
Subject: RE: Can P2P applications learn to play fair on networks?

> I don't see how this Oversi caching solution will work with today's HFC deployments -- the demodulation happens in the CMTS, not in the field. And if we're talking about de-coupling the RF from the CMTS, which is what is happening with M-CMTSes (http://broadband.motorola.com/ips/modular_CMTS.html), you're really changing an MSO's architecture. Not that I'm dissing it, as that may be what's necessary to deal with the upstream bandwidth constraint, but that's a future vision, not a current reality.
>
> Frank
I'm a bit late to this conversation but I wanted to throw out a few bits of info not covered. A company called Oversi makes a very interesting solution for caching Torrent and some Kad-based overlay networks as well, all done through some cool strategically placed taps and prefetching. This way you could cache out at whatever rates you want and mark traffic how you wish as well. This does move a statistically significant amount of traffic off of the upstream and onto a gigabit-ethernet (or something) attached cache server, solving large bits of the HFC problem. I am a fan of this method as it does not require a large footprint of inline devices, but rather a smaller footprint of statistics-gathering sniffers and caches distributed in places that make sense. Also the people at Bittorrent Inc have a cache discovery protocol so that their clients have the ability to find cache servers with their hashes on them. I am told these methods are in fact covered by the DMCA but remember I am no lawyer. Feel free to reply direct if you want contacts Rich -- From: Sean Donelan [EMAIL PROTECTED] Sent: Sunday, October 21, 2007 12:24 AM To: nanog@merit.edu Subject: Can P2P applications learn to play fair on networks? Much of the same content is available through NNTP, HTTP and P2P. The content part gets a lot of attention and outrage, but network engineers seem to be responding to something else. If it's not the content, why are network engineers at many university networks, enterprise networks, and public networks concerned about the impact particular P2P protocols have on network operations? If it was just a single network, maybe they are evil. But when many different networks all start responding, then maybe something else is the problem. The traditional assumption is that all end hosts and applications cooperate and fairly share network resources. NNTP is usually considered a very well-behaved network protocol. Big bandwidth, but sharing network resources.
HTTP is a little less behaved, but still roughly seems to share network resources equally with other users. P2P applications seem to be extremely disruptive to other users of shared networks, and cause problems for other polite network applications. While it may seem trivial from an academic perspective to do some things, for network engineers the tools are much more limited. User/programmer/etc. education doesn't seem to work well. Unless the network enforces a behavior, the rules are often ignored. End users generally can't change how their applications work today even if they wanted to. Putting something in-line across a national/international backbone is extremely difficult. Besides, network engineers don't like additional in-line devices, no matter how much the sales people claim it's fail-safe. Sampling is easier than monitoring a full network feed. Using netflow sampling or even a SPAN
Re: Seeking UUNET/Level3 help re packet loss between Comcast Onvoy customers
This is resolved, though no one knows exactly why. If someone at Global Crossing has relevant logs of route flaps or somesuch, that might be interesting, but I can live with the mystery. Comcast advertises a specific route for the problem space, 71.63.128.0/17. Don't ask me why. Early yesterday morning, they flapped that route -- stopped advertising it, then started again. *Probably* shortly after that, and possibly as a consequence, our connectivity to the range of Comcast IP addresses typically assigned to residential customers in MN was restored. Our ISP, Onvoy, has three upstreams: Verizon, Global Crossing, and ATT. The route from 137.22/16 and 130.71/16 to Comcast typically took the path

  carleton-onvoy-global crossing-level3-att-comcast

But the path from Comcast to us was simply

  comcast-att-onvoy-carleton

For some time, Global Crossing was a mystery hop, because Onvoy was not terribly communicative, and Global Crossing was not showing up in traceroutes. Onvoy has let us know that this could have been because the glbx/onvoy link was filtering ICMP due to DDoS attacks, but this seems wrong to me on a number of levels. Anyway, it's fixed. -- Rich Graves http://claimid.com/rcgraves Carleton.edu Sr UNIX and Security Admin CMC135: 507-646-7079 Cell: 952-292-6529
Seeking UUNET/Level3 help re packet loss between Comcast Onvoy customers
   0 5006 7018 13367 13367 13367 13367 i
*  76.113.224.0/19  137.192.32.173         0 5006 7018 13367 13367 13367 13367 i
*  76.133.0.0/19    137.192.32.173         0 5006 7018 13367 13367 13367 13367 i
*  76.133.0.0/17    137.192.32.173         0 5006 3549 3356 13367 13367 13367 13367 13367 13367 i
*  76.133.32.0/19   137.192.32.173         0 5006 7018 13367 13367 13367 13367 i
*  76.133.64.0/19   137.192.32.173         0 5006 7018 13367 13367 13367 13367 i
*  76.133.96.0/19   137.192.32.173         0 5006 7018 13367 13367 13367 13367 i
*  76.154.0.0/19    137.192.32.173         0 5006 7018 13367 13367 13367 13367 i
*  76.154.0.0/17    137.192.32.173         0 5006 7018 13367 13367 13367 13367 i
*  76.154.32.0/19   137.192.32.173         0 5006 7018 13367 13367 13367 13367 i
*  76.154.64.0/19   137.192.32.173         0 5006 7018 13367 13367 13367 13367 13367 i
*  76.154.96.0/19   137.192.32.173         0 5006 7018 13367 13367 13367 13367 13367 i
*  209.162.0.0/18   137.192.32.173         0 5006 7018 13367 13367 13367 13367 i

cisco7200#show ip bgp neighbors 192.42.152.218 routes | include 13367
*  24.31.0.0/19     192.42.152.218  110    0 57 13367 i
*  24.118.0.0/16    192.42.152.218  110    0 57 13367 i
*  24.245.0.0/18    192.42.152.218  110    0 57 13367 i
*  24.245.64.0/20   192.42.152.218  110    0 57 13367 i
*  66.41.0.0/16     192.42.152.218  110    0 57 13367 i
*  67.190.192.0/19  192.42.152.218  110    0 57 13367 i
*  67.190.224.0/19  192.42.152.218  110    0 57 13367 i
*  69.180.128.0/18  192.42.152.218  110    0 57 13367 i
*  70.89.196.0/22   192.42.152.218  110    0 57 13367 i
*  70.89.200.0/22   192.42.152.218  110    0 57 13367 i
*  71.193.64.0/19   192.42.152.218  110    0 57 13367 i

cisco7200#sho ip bgp 71.63.168.1
BGP routing table entry for 71.63.128.0/17, version 31179174
Paths: (1 available, best #1, table Default-IP-Routing-Table)
  Not advertised to any peer
  5006 3549 3356 13367 13367 13367 13367
    137.192.32.173 from 137.192.32.173 (172.30.0.5)
      Origin IGP, localpref 100, valid, external, best
      Community: 328073238

cisco7200#sho ip bgp 73.112.232.1
BGP routing table entry for 73.112.0.0/14, version 31179171
Paths: (1 available, best #1, table Default-IP-Routing-Table)
  Not advertised to any peer
  5006 3549 3356 13367 13367 13367 13367
    137.192.32.173 from 137.192.32.173 (172.30.0.5)
      Origin IGP, localpref 100, valid, external, best
      Community: 328073238

-- Rich Graves http://claimid.com/rcgraves Carleton.edu Sr UNIX and Security Admin CMC135: 507-646-7079 Cell: 952-292-6529
ATT / Time-Warner problem?
Anyone aware of an issue with ATT / Time-Warner? We're seeing traceroutes from ATT to TW die around 24.95.x.x (*.columbus.rr.com) Thanks, Rich
Re: Abuse procedures... Reality Checks
On Sat, Apr 07, 2007 at 05:12:19PM -0500, Frank Bulk wrote: If they're properly SWIPed why punish the ISP for networks they don't even punish? Since when is it "punishment" to refuse to extend a privilege that's been repeatedly and systematically abused? (You have, of course, absolutely no right whatsoever to expect any services of any kind from anyone other than those you've contracted for. Everything beyond that is a privilege, generously furnished to you at the whim of those operating the service. It may be restricted or withdrawn at any time, for any reason, with or without notice to you. Now as a general rule, we all have chosen to furnish those services -- by default and without limitation. But that doesn't turn them into entitlements.) The word "punish" is completely inapplicable in this context. operate, that obviously belong to their business customers? Questions: 1. Is your name on it in any way, shape or form? (This includes allocations.) 2. Is it emitting abuse? If the answers are "yes", then it's YOUR abuse. Trying to evade responsibility by claiming that "it's one of our customers" is just another pathetic excuse for incompetence. Of course, it doesn't hurt to copy the ISP or AS owner for abuse issues from a sub-allocated block -- you would hope that ISPs and AS owners would want to have clean customers. Unless of course the ISP or AS owner *are* the abuser under another name, or unless they're actively complicit. Both are quite common. Beyond that: any *competent* ISP or AS owner will already know about the abuse. They will have deployed measures designed to detect said abuse well before anyone else out there reports it to them. (Example: setting up their own spamtraps explicitly designed to catch their own customers.) By the time an external observer reports a problem to them, it should already be old news and already be well on its way to remediation. ---Rsk
Re: Abuse procedures... Reality Checks
On Tue, Apr 10, 2007 at 07:44:59AM -0500, Frank Bulk wrote: Comcast is known to emit lots of abuse -- are you blocking all their networks today? All? No. But I shouldn't find it necessary to block ANY, and wouldn't, if Comcast weren't so appallingly negligent. (I'm blocking huge swaths of Comcast space from port 25. This shouldn't really surprise anyone; Comcast runs what may well be the most prolific spam-spewing network in the world. I saw attempts from 80,000+ distinct IP addresses during January 2007 alone -- to a *test* mail server. I should have seen zero. The mitigation techniques for making that happen are well-known, have been well-known for years, and can be implemented easily by any competent organization.) This, by the way, should not be taken as indicative of either what I've done in the past or may do in the future. Nor should it be taken as indicative of what decisions I've made in re other networks. ---Rsk
Re: Abuse procedures... Reality Checks
On Wed, Apr 11, 2007 at 03:44:01PM -0400, Warren Kumari wrote: The same thing happens with things like abuse -- it is easy to deal with abuse on a small scale. It is somewhat harder on a medium scale and harder still on a large scale -- the progression from small to medium to large is close to linear. First, I don't buy this. I think dealing with abuse is *much* easier for large operations than small. But suppose you're right. Let me concede that point for the purpose of making my second point (and generic you throughout, BTW): Second, I don't really care how hard it is. It's YOUR network, YOU built it, YOU plugged it into our Internet: therefore, however hard it is, it's YOUR problem. Fix it. Or if you choose not to: at least stop whining about how much you don't like the way in which other people try to partially compensate for YOUR failure. ---Rsk
Re: Abuse procedures... Reality Checks
On Sat, Apr 07, 2007 at 09:50:34PM +, Fergie wrote: I would have to respectfully disagree with you. When network operators do due diligence and SWIP their sub-allocations, they (the sub-allocations) should be authoritative in regards to things like RBLs. After thinking it over: I partly-to-mostly agree. In principle, yes. In practice, however, [some] negligent network operators have built such long and pervasive track records of large-scale abuse that their allocations can be classified into two categories: 1. Those that have emitted lots of abuse. 2. Those that are going to emit lots of abuse. In such cases, I'm not inclined to wait for (2) to become reality. ---Rsk
Re: Abuse procedures... Reality Checks
On Sat, Apr 07, 2007 at 04:20:59PM -0500, Frank Bulk wrote: Define network operator: the AS holder for that space or the operator of that smaller-than-slash-24 sub-block? If the problem consistently comes from /29 why not just leave the block in and be done with it? Because experience...long, bitter experience...strongly indicates that what happens today often merely presages what will happen tomorrow. Because I haven't got unlimited time. Or money. Or resources. Because I haven't got unlimited WHOIS queries. (Although I and everyone else *should* have those. There are no valid reasons to rate-limit any form of WHOIS query.) Because there are way, WAY too many incompetently-managed networks whose operators can often be heard complaining about the abuse inbound to them at the same time they fail to take rudimentary measures to control the abuse outbound from them. cough port 25 blocking cough Because I was more patient for the first decade or two, and it proved to be a losing strategy. Because This Is Not My Problem. If by chance someone benign has chosen to locate their operation in known-hostile, known-negligently-operated network space, then their failure to perform due diligence may have consequences for them. I guess this begs the question: Is it best to block with a /32, /24, or some other range? Sounds a lot like throwing something against the wall and seeing what sticks. Or vigilantism. 1. Gratuitously labeling carefully-considered measures as random is not a route to productive conversation. 2. It is hardly vigilantism to take passive measures to protect one's network/systems/users from hostile activity. Doubly so when those measures consist merely of a refusal to grant a *privilege* after it's been repeatedly, systemically abused. ---Rsk
Re: Abuse procedures... Reality Checks
On Sat, Apr 07, 2007 at 02:31:25PM -0500, Frank Bulk wrote: I understand your frustration and appreciate your efforts to contact the sources of abuse, but why indiscriminately block a larger range of IPs than what is necessary? 1. There's nothing indiscriminate about it. I often block /24's and larger because I'm holding the *network* operators responsible for what comes out of their operation. If they can't hold the outbound abuse down to a minimum, then I guess I'll have to make up for their negligence on my end. I don't care why it happens -- they should have thought through all this BEFORE plugging themselves in and planned accordingly. (Never build something you can't control.) Neither I nor J. Oquendo nor anyone else is required to spend our time, our money, and our resources figuring out which parts of X's network can be trusted and which can't. It is entirely X's responsibility to make sure that its _entire_ network can be permitted the privilege of access to ours. And (while I don't wish to speak for anyone else), I think we're prepared to live with a certain amount of low-level, transient, isolated noise. We are not prepared to live with persistent, systemic attacks that are not dealt with even *after* complaints are filed. (Which shouldn't be necessary anyway: if we can see inbound hostile traffic to our networks, surely X can see it outbound from theirs. Unless X is too stupid, cheap or lazy to look. Packets do not just fall out of the sky, y'know?) 2. "necessary" is a relative term. Example: I observed spam/spam attempts from 3,599 hosts on pldt's network during January alone. I've blocked everything they have, because I find it *necessary* to not wait for the other N hosts on their network to pull the same stunt. I've found it *necessary* to take many other similar measures as well because my time, money and resources are limited quantities, so I must expend them frugally while still protecting the operation from overtly hostile networks.
That requires pro-active measures and it requires ones that have been proven to be effective. If X, for some value of X, is unhappy about this, then X should have thought of that before permitting large amounts of abuse to escape its operation over an extended period of time. Had X done its job to a baseline level of professionalism, then this issue would not have arisen, and we'd all be better off for it. So. If you (generic you) can't keep your network from being a persistent and systemic abuse source, then unplug it. Now. If, on the other hand, you decide to stick around anyway while letting the crap flow: no whining when other people find it necessary to take steps to defend themselves from your incompetence. ---Rsk
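The escalation described in these two messages -- list individual hosts, then widen to the covering /24 (or more) once a network proves to be a systemic source -- can be sketched roughly as below. The threshold, the /24 granularity, and the function name are illustrative assumptions on my part, not Rsk's actual policy:

```python
import ipaddress
from collections import Counter

def escalate_blocks(abuser_ips, threshold=5):
    """Escalate from per-host /32 listings to whole-/24 blocks.

    Counts distinct abusing hosts per covering /24 and returns the set
    of prefixes to block: the whole /24 once 'threshold' distinct hosts
    in it have abused, individual /32s otherwise.  Threshold and
    granularity are illustrative, not anyone's real rule set.
    """
    hosts = set(abuser_ips)
    per24 = Counter(ipaddress.ip_network(f"{ip}/24", strict=False)
                    for ip in hosts)
    blocks = set()
    for net, count in per24.items():
        if count >= threshold:
            blocks.add(str(net))                 # whole network misbehaves
        else:
            blocks.update(f"{ip}/32" for ip in hosts
                          if ipaddress.ip_address(ip) in net)
    return blocks
```

For example, five distinct abusers out of one /24 get the whole /24 listed, while a lone abuser elsewhere stays a /32.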
Re: Possibly OT, definately humor. rDNS is to policy set by federal law.
On Thu, Mar 15, 2007 at 07:41:58PM -0700, S. Ryan wrote: However, while it's not really above me to do the same, he could have removed the email address so spammers aren't adding to that guy's list of problems. Anti-spam strategies based on concealment and/or obfuscation of addresses are no longer viable. (For a variety of reasons, including harvesting from public sources, harvesting from private sources such as compromised systems, and the deployment of abusive, spam-supporting tactics such as callbacks/sender address verification.) Yes, I know there are counter-examples, I have my own collection of them. But they're exceptions, not the rule. ---Rsk
Re: Counting tells you if you are making progress
On Wed, Feb 21, 2007 at 12:31:30AM -0500, Sean Donelan wrote: Counting IP addresses tends to greatly overestimate and underestimate the problem of compromised machines. It tends to overestimate the problem in networks with large dynamic pools of IP addresses as a few compromised machines re-appear across multiple IP addresses. It tends to underestimate the problem in networks with small NAT pools with multiple machines sharing a few IP addresses. Differences between networks may reflect different address pool management algorithms rather than different infection rates. Yes, but (I think) we already knew that. If the goal is to provide a minimum estimate, then we can ignore everything that might cause an underestimate (such as NAT). In order to avoid an overestimate, multiple techniques can be used. For example, observation from multiple points over a period of time much shorter than the average IP address lease time for dynamic pools, use of rDNS to identify static pools, use of rDNS to identify separate dynamic pools (e.g., a system which appears today inside hsd1.oh.comcast.net is highly unlikely to show up tomorrow inside hsd1.nj.comcast.net), classification by OS type (which, BTW, is one way to detect multiple systems behind NAT), and so on. I think Gadi makes a good point: in one sense, the number doesn't really matter, because sufficiently clueful attackers can already lay their hands on enough to mount attacks worth paying attention to. On the other hand, I still think that it might be worth knowing, because I think the fix (or probably more accurately fixes) (and this is optimistically assuming such exist) may well be very different if we have 50M than if we have 300M on our hands. ---Rsk
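The minimum-estimate approach outlined above (short observation window, rDNS-based pool classification) might be sketched as follows; the pool classifier and the data layout are assumptions for illustration, not the techniques as actually deployed:

```python
from collections import defaultdict

def minimum_bot_estimate(observations, window_hours=4):
    """Conservative lower bound on distinct compromised hosts.

    observations: iterable of (ip, rdns, hour) tuples from one vantage
    point.  Only observations inside a window much shorter than typical
    DHCP lease times are kept, so a re-leased dynamic address is not
    double-counted, and hosts are grouped by their rDNS pool.
    """
    def pool_of(rdns):
        # Crude pool classifier: drop the host-specific leading label,
        # keeping e.g. "hsd1.oh.comcast.net" as the pool identifier.
        return rdns.split(".", 1)[1] if "." in rdns else rdns

    pools = defaultdict(set)
    for ip, rdns, hour in observations:
        if hour < window_hours:        # short window beats lease churn
            pools[pool_of(rdns)].add(ip)
    return sum(len(ips) for ips in pools.values())
```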
Re: botnets: web servers, end-systems and Vint Cerf [LONG, sorry]
On Mon, Feb 19, 2007 at 02:04:13PM +, Simon Waters wrote: I simply don't believe the higher figures bandied about in the discussion for compromised hosts. Certainly Microsoft's malware team report a high level of trojans around, but they include things like the Jar files downloaded onto many PCs, that attempt to exploit a vulnerability that most people patched several years ago. Simply identifying your computer downloaded (as designed), but didn't run (because it was malformed), malware, isn't an infection, or of especial interest (other than indicating something about the frequency with which webservers attempt to deliver malware). I don't understand why you don't believe those numbers. The estimates that people are making are based on externally-observed known-hostile behavior by the systems in question: they're sending spam, performing SSH attacks, participating in botnets, controlling botnets, hosting spamvertised web sites, handling phisher DNS, etc. They're not based on things like mere downloads or similar. As Joe St. Sauver pointed out to me, a million compromised systems a day is quite reasonable, actually (you can track it by rsync'ing copies of the CBL and cumulating the dotted quads over time). So I'm genuinely baffled. I'd like someone to explain to me why this seems implausible. BTW #1: I'm not asserting that my little January experiment is the basis for such an estimate. It's not. It wasn't intended to be, otherwise I would have used a very different methodology. BTW #2: All of this leaves open an important and likely-unanswerable question: how many systems are compromised but not as yet manifesting any external sign of it? Certainly any competent adversary would hold a considerable fraction of its forces in reserve. (If it were me, that fraction would be at least the majority.) ---Rsk
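The CBL-cumulation idea attributed to Joe St. Sauver above amounts to a running set union over daily snapshots. A minimal sketch, assuming you have already fetched listing snapshots by some authorized means (the CBL's actual access method and file format are not specified here):

```python
def cumulate_listings(daily_snapshots):
    """Union the dotted quads seen across successive DNSBL snapshots.

    daily_snapshots: iterable of iterables of IP strings, one per day
    (e.g. parsed from locally mirrored zone copies).  Returns
    (total_distinct, per_day_new) so you can see both the cumulative
    population and how many never-before-seen hosts turn up each day.
    """
    seen, per_day_new = set(), []
    for day in daily_snapshots:
        day = set(day)
        per_day_new.append(len(day - seen))  # fresh listings today
        seen |= day                          # cumulate over time
    return len(seen), per_day_new
```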
Re: botnets: web servers, end-systems and Vint Cerf [LONG, sorry]
I really don't want to get into an OS debate here, but this does have major operational impact, so I will anyway but will be as brief as possible. Please see second (whitespace-separated) section for some sample hijacked system statistics which may or may not reflect overall network population. On Fri, Feb 16, 2007 at 04:27:55PM -, [EMAIL PROTECTED] wrote: I disagree. [...] Therefore, I assert that securing systems adequately for use on the Internet is indeed a SOLVED PROBLEM in computing. However, it isn't yet solved in a social or business sense. I think I understand your point about the social and business sense of the problem; if so, then we're probably in at least rough agreement on that. People do stupid things with computers (like reading email with a web browser, or replying to spam) and it's proven to be very difficult to convince them to stop doing those things. I'm reminded of Ranum's point (from http://www.ranum.com/security/computer_security/editorials/dumb/ ) about how if user education was going to work...it would have worked by now. I think the ongoing success of phishing operations, including those run by illiterate amateurs, in face of massive publicity via nearly every communications channel society has to offer, illustrates it nicely. But, and this may be where we disagree, it's not solved where Microsoft operating systems are concerned -- and I don't accept the notion that just putting such systems behind a firewall/NAT box is adequate. (I'll also argue that any OS which *requires* an external firewall to survive more than a few minutes' exposure is unsuitable for use on the Internet. *Not good enough*.) But suppose you put such a firewall in place. You'll need to configure the firewall properly -- paying as much attention to outbound rules as inbound. (And how many people ever do that? Even on corporate networks, there are still people stunningly incompetent enough to use default-permit policies on outbound traffic. 
And controlling outbound traffic from these systems is arguably more important than controlling inbound -- inbound likely only abuses the owner, outbound abuses the entire Internet.) You'll need to add anti-virus software. And anti-spyware software. Then you need to make sure the signature databases for both of those are updated early and often, keeping in mind that you have now elected to play a game that you will inevitably lose the first time that new malware propagates faster than the keepers of those databases can develop and distribute signatures. Vegas lives for suckers like this. And you'll need to de-install IE and Outlook, since everything else you've done will be defeated as soon as the next IE/Outlook-remotely-exploitable-and-leading-directly-to- full-system-compromise-here's-a-working-demo is published on full-disclosure, which should be, oh, about three hours from now. And this is before we even get to the licensing and DRM backdoors *designed into* Vista. Something which requires this much work just to make it through its first day online, while being used by J. Random Person, is hopelessly inadequate. Which is why systems like this are routinely compromised in huge numbers. Which is why we have a large-scale problem on our hands. Which brings me to the second point, and that is skepticism over the 100M ballpark figure that's been bandied about. Personally, I wouldn't even blink if someone produced convincing proof that the real number was 300M. I think that's completely plausible -- plausible but still, I very much hope, unrealistically high. So from my point of view, this 100M stuff is old news -- i.e., I'm telling you the ocean is wet. A tiny example: some data (summarized below) from a small experiment last month using a single test mail server. I threw away all the data blocked outright by the firewall in front of it. I threw away all data that didn't involve connections directed at port 25. 
I threw away all the data for connecting hosts without rDNS. I threw away all the data for connecting hosts with rDNS that looked even vaguely server-like. I threw away repeat visits. All of which means that my sampling method is akin to waving a thimble in a hurricane and will thus provide a gross (and likely skewed) underestimate. This left me with 1.5M observed hosts seen in a month. They're all sending spam. (How do I know? Because 100% of the mail traffic sent to that server is spam.) And they're all running Windows, except for a handful which aren't or which were indeterminate. Note that rDNS lookups were from local long-lived cache, so rDNS may be well out-of-date in some cases. Some random examples:

41.241.32.87    dsl-241-32-87.telkomadsl.co.za
89.28.3.133     89-28-3-133.starnet.md
190.49.152.243  190-49-152-243.speedy.com.ar
218.178.50.40   softbank218178050040.bbtec.net
200.171.123.83  200-171-123-83.dsl.telesp.net.br
74.132.179.31
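The successive "threw away" steps above can be sketched as a filter pipeline. What counts as "server-like" rDNS below is my guess at a plausible rule, and the event layout is invented for illustration; this shows the method, not the author's exact criteria:

```python
# Label prefixes treated as "server-like" rDNS -- an assumed heuristic.
SERVERISH = ("mx", "mail", "smtp", "relay", "mta")

def winnow(events):
    """Apply the successive discard steps to raw connection events.

    events: iterable of dicts with 'ip', 'port', 'rdns' keys (firewall-
    blocked traffic is assumed to be gone already).  Each filter mirrors
    one discard step from the text; survivors are the sampled hosts.
    """
    seen, kept = set(), []
    for ev in events:
        if ev["port"] != 25:                  # only port-25 traffic
            continue
        if not ev["rdns"]:                    # no rDNS -> discard
            continue
        first = ev["rdns"].split(".", 1)[0].lower()
        if any(first.startswith(s) for s in SERVERISH):
            continue                          # looks like a real server
        if ev["ip"] in seen:                  # repeat visit
            continue
        seen.add(ev["ip"])
        kept.append(ev["ip"])
    return kept
```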
Re: Every incident is an opportunity (was Re: Hackers hit key Internet traffic computers)
My two (and a half) cents. 1. Systems that need a firewall, antivirus and antispyware software added on to survive for more than a few minutes SHOULD NOT BE CONNECTED TO THE INTERNET IN THE FIRST PLACE. They're simply not good enough. It's like bringing a knife to a gunfight. (nod to Mr. Connery) 2. The idea that you can run a program on a known-compromised OS and count on that program to detect and/or remove the problem is fundamentally flawed. The only way to have much confidence in the former is to boot from a known-UNcompromised OS and run it from there; the only way to have some confidence in the latter is to wipe the drives and start over. And there are still ways that both of these can fail (e.g., sufficiently clever malware which hides from the first and manages to survive the second by concealing itself in restored data). Hitting the "scan and disinfect" button or whatever they call it this week is well on its way to becoming a NOOP. 3. Banks, credit card companies, and numerous online merchants have trained their users to be excellent phish victims by training them to read their mail with a web browser. Anyone who is serious about stopping phishing will stop sending mail marked up with HTML. 4. Network operators need to be far more proactive about keeping Bad Stuff from *leaving* their networks. (After all, if it can be detected inbound to X's network, then in most cases it can be detected outbound from Y's -- the exceptions being things like slow, highly distributed attacks which originate nowhere and everywhere.) 5. I have no sympathy for anyone who still uses the IE and/or Outlook malware-and-exploit-propagation-engines-disguised-as-applications. Not that the alternatives are panaceas -- of course they're not -- but at least they're a big step away from two of the primary compromise vectors.
I figure little, if anything, substantive will be done about 1-4, but I have some hope that 5 is simple enough that sufficient repetition will eventually have some effect. ---Rsk
Re: an RBL for virus alert senders?
On Sat, Feb 10, 2007 at 10:02:45AM -0500, Mark Jeftovic wrote: Is there an RBL for mail servers run by brain dead postmasters that insist on running anti-viral software that sends out less-than-useless virus alerts, virus in your email, banned attachment spewage to the guaranteed-to-be-forged From address in the message headers? A number of DNSBLs now include zones for outscatter (aka backscatter) senders running either broken anti-virus s/w, broken anti-spam s/w, broken mailers, or broken appliances. See: http://enemieslist.com/news/archives/2006/05/a_useful_collec.html for a useful collection of links. For DNSBLs, I believe you may wish to look at: http://tqmcube.com/weight.php http://www.au.sorbs.net/using.shtml http://www.uceprotect.net/en/index.php?m=3&s=0 (Keep in mind that the last was written by folks whose native tongue is German, so cut them some slack on spelling/grammar errors.) I think (but am not sure) that Spamcop also lists outscatter senders. ---Rsk
Re: Anyone with SMTP clue at Verizon Wireless / Vtext?
On Wed, Feb 07, 2007 at 06:25:41PM -0800, Mike Lyon wrote: Their gateway is blocking mail from my host. Of course, there is no clueful contact info on their webpage... I know you asked for off-list, but since this (mail to Verizon refused) is a recurring problem, I'm sending this on-list as well. Anyone who has trouble sending mail to Verizon should check their own *incoming* mail logs for connections coming from systems in 206.46.0.0/16 (GTEN-206-46), most likely 206.46.252.0/24, whose names look something like: 206.46.252.147 sv114pub.verizon.net 206.46.252.148 sv124pub.verizon.net 206.46.252.149 sv134pub.verizon.net If you're refusing those connections or blocking mail RCPT TO attempts from them, then Verizon will probably refuse your outbound SMTP traffic to them. This may not be the problem you're facing; or it may not be the only problem you're facing. But it's easy enough to check and rule out if that's the case. ---Rsk p.s. *Why* is this happening? Because Verizon has deployed a very ill-considered anti-spam technique (callbacks AKA sender address verification) that serves three primary functions: first, it forcibly shifts the costs of Verizon's spam control onto third parties; second, it provides a spam support service; and third, it provides a free, anonymizing, scalable DDoS service.
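Checking your incoming logs against the ranges named in that message is a simple containment test. A sketch (log parsing is left out; only the /16 actually cited in the message is encoded, and the function name is mine):

```python
import ipaddress

# The range named in the message (GTEN-206-46); the connections of
# interest were reported to come mostly from 206.46.252.0/24 within it.
VERIZON_CALLBACK_NETS = [ipaddress.ip_network("206.46.0.0/16")]

def suspect_callback_sources(log_ips, nets=VERIZON_CALLBACK_NETS):
    """Return the log IPs that fall inside the cited ranges.

    log_ips: IP strings pulled from your own *incoming* mail logs;
    extracting them from your particular log format is left to you.
    """
    return [ip for ip in log_ips
            if any(ipaddress.ip_address(ip) in net for net in nets)]
```

If this returns hits and you are refusing those connections, that may explain Verizon refusing your outbound SMTP in turn.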
Re: what the heck do i do now?
On Wed, Jan 31, 2007 at 07:04:37PM -0800, Matthew Kaufman wrote: (As an example, consider what happens *to you* if a hospital stops getting emailed results back from their outside laboratory service because their email firewall is checking your server, and someone dies as a result of the delay) A hospital which relies on email for laboratory results is obviously negligent. They should know that email is best-effort, no better, and that as a result it's an unreliable transport medium. (And increasingly so given the massive abuse being heaped on it as well as any number of ill-conceived anti-abuse ideas (C/R, callbacks) that actually make the problem worse.) Using it for life-critical data is foolish. There are much better choices available (including offline ones such as FedEx) for the transfer of critical information. ---Rsk
Re: what the heck do i do now?
We've told people for years that when they choose to use a DNSBL or RHSBL that they need to (a) subscribe to the relevant mailing list, if it has one and/or (b) periodically revisit the relevant web site, if it has one, so that they can keep themselves informed about any changes in its status or policies and/or (c) pay attention to what their own logs are telling them. They have not listened, for many values of they. Maybe it's necessary to speak to them in a language they understand, despite the large downside of doing so. As someone who has had his own lapses into denseness, I can certainly understand that this isn't pleasant, but on the other hand, the lessons I've learned that way have been sufficiently clear that I've never made those particular mistakes again. I would argue that among the lessons here are "do not hardwire any DNSBL/RHSBL into any piece of software", "do not blithely use any such piece of software and assume it'll work", and "if you choose to use a DNSBL/RHSBL, then pay attention." <chuckle> Perhaps you should list (in the zone) all IP addresses which are repeatedly querying the zone -- after announcing this policy, of course. ;-) More seriously, I'll see what I can do to pass the word along in the faint hope that this will have some effect. ---Rsk
Re: Google wants to be your Internet
holy kook bait. it's amazing after all these years, and companies, how many people, and companies, still don't get it. /rf
Re: Phishing and BGP Blackholing
On Wed, Jan 03, 2007 at 05:44:28PM +1300, Mark Foster wrote: So why the big deal? Because it's very rude -- like top-posting, or full-quoting, or sending email marked up with HTML. Because it's an unprovoked threat. Because it's an attempt to unilaterally shove an unenforceable contract down the throats of everyone reading it. Because it's a tip-off that the sender does not value the time or resources of recipients. Because it's insulting. Because (borrowing from the first link below) it's simply too stupid for words. Please see: Mailing and Posting Etiquette: Don't Send Bogus Legalistic Boilerplate http://www.river.com/users/share/etiquette/#legalistic Stupid Email Disclaimers http://www.goldmark.org/jeff/stupid-disclaimers/ Stupid E-mail Disclaimers and the Stupid Users that Use Them http://attrition.org/security/rants/z/disclaimers.html for longer (and much better) explanations. For a much longer explanation of these and related points, see: Miss Mailers Answers Your Questions on Mailing Lists http://www.faqs.org/faqs/mail/miss-mailers/ ---Rsk
Re: Best Email Time
On Fri, Dec 08, 2006 at 07:50:57AM -0500, David Hester wrote: CNN recently reported that 90% of all email on the internet is spam. http://www.cnn.com/2006/WORLD/europe/11/27/uk.spam.reut/index.html CNN is behind the times. We passed 90% junk (spam, viruses, bogus virus warnings, worms, outscatter spam, C/R spam, etc.) a few years ago. Locally, over the last three months, we've been rejecting 98% of incoming traffic with just two reported problems from internal and external users. And almost all of that rejected traffic TCP-fingerprints as originating from hosts running Windows. ---Rsk
Re: register.com down sev0?
On Thu, Oct 26, 2006 at 12:14:43AM -0400, [EMAIL PROTECTED] wrote: On 26 Oct 2006, Paul Vixie wrote: i wonder if that's due to the spam they've been sending out? Paul, this isn't nanae. Let's not sling accusations like that wildly. There's nothing wild about it -- Paul is one of the most sober, reasoned observers of the spam problem, and if he told me that my servers were sending spam, then I'd darn well go investigate. Right now. Besides -- it's not like this isn't common knowledge in the anti-spam world. I'm sure I'm not the only one who's had unsatisfying correspondence with register.com wherein they refuse to lift a finger to stop the abuse from/facilitated by their operation. ---Rsk
Re: Boeing's Connexion announcement
and you will NEVER see this service again until there is a monetary incentive to offer said service. So.. why is this still a discussion? On Sun, 15 Oct 2006, Owen DeLong wrote: This may be a nit, but, you will _NEVER_ see AC power at any, let alone all of the seats. Seat power that works with the iGo system is DC and is not conventional 110 AC. Owen On Oct 15, 2006, at 3:39 AM, Mikael Abrahamsson wrote: On Sun, 15 Oct 2006, Patrick W. Gilmore wrote: e-mail from the plane. :) Lack of seat power was not an issue, I just had two batteries. And this was BOS - MUC, which ain't a short flight. It's quite likely that on a grander scale of things, it's better economy that the few people who want to use their laptop the whole flight, do get two batteries, than doing the investment of putting AC power in all seats. Otoh, more batteries on planes increases the risk of fire due to exploding batteries happening in the plane :P -- Mikael Abrahamsson  email: [EMAIL PROTECTED] /rf
Re: SORBS Contact
On Wed, Aug 09, 2006 at 10:29:52PM -0500, Robert J. Hantson wrote: So with all this talk of Blacklists... does anyone have any suggestions that would be helpful to curb the onslaught of email, without being an adminidictator? Yes. First, run a quality MTA -- that *requires* an open-source MTA that is subject to ongoing, frequent, and strenuous peer review. I recommend one of {postfix, sendmail, exim, courier}. I recommend against qmail. Second, use the built-in capabilities of that MTA to block SMTP traffic from misbehaving mail servers. Examples: (1) Use the greet_pause (sendmail) or equivalent feature. (2) enable checks for forward and reverse DNS existence. (3) enable checks for HELO/EHLO (only to see if it's a FQDN, not to see if it matches connecting host). (4) use postgrey (or equivalent) with whitelisting of hosts that are known to you. And so on -- each MTA has a myriad of features that boil down to reject mail from misbehaving hosts and those features can be used to reject an awful lot of spam. (Yes, these measures will also occasionally reject mail from hosts which are either running highly broken software or which are badly misconfigured. This is a feature, not a bug, and the onus is on the operators of those hosts to bring them into compliance with Internet standards, both codified and de facto.) Third, Put in the Spamhaus DROP list on your border routers/firewalls. There is no reason to accept ANY network traffic, nor send any network traffic to, any network on that list. Nothing good can come of it -- for you, that is. Update once a month. Fourth, use a judicious selection of DNSBLs/RHSBLs (to do outright rejection). I use and recommend: Spamhaus XBL (which is the XBL+CBL combined zone). NJABL DSBL TQMcube zone: dhcp SORBS zones: http, socks, misc, smtp, web, zombie, dul AHBL I've never had a FP from the first three over many years of use. 
I've had a handful of scattered FPs from the second three, but each has been quickly addressed by the zone's maintainers -- and about half of those weren't their fault anyway, but they still fixed the problem. Fifth, if you don't need to accept mail from certain countries: don't. Many people (including me) refuse all mail from Korean and Chinese IP space because *at their site* it's 100.00% spam. TQMcube provides DNSBLs for that, as do others. (Conversely, if you happen to be in either of those countries, you may find that 100.00% of your incoming traffic from the US is spam...in which case you should consider blocking all US IP space.) Sixth, consider a combination of AV/AS measures. One such combination might be ClamAV and SpamAssassin; another might use those two glued together with Amavis-new. But: it's not worth doing this until you've done all the other stuff, because otherwise you will burden these (relatively) computationally-intensive programs with traffic that you could -- and should -- have already rejected near the beginning of the SMTP transaction. If you use SpamAssassin, you can also use various DNSBLs as part of weighted scoring. This is a fallback if you're not comfortable using them to do outright rejection. Seventh, do not use SMTP callbacks -- they are abusive and readily lend themselves to DDoS attacks. They're also pointless and stupid. Don't bother using DomainKeys/SPF/whatever -- these technologies were failures from the beginning despite grandiose promises ("Spam as a technical problem is solved by SPF"). And do everything possible to make sure you don't emit outscatter (aka backscatter): reject during the SMTP conversation, don't accept-then-bounce. Eighth, get on the mailing lists that discuss this, like Spam-L, spam-research, spam-tools, spambayes, etc. NANOG really isn't the best place for this conversation. 
Finally, and perhaps most importantly: don't be a source of spam or a supporter of it (by providing HTTP, DNS or other services to spammers). Make sure you have a working, unblocked abuse address, read it, and act on what you receive there promptly - by immediately and permanently revoking all services that you're providing to spammers. Make sure that you have a TOS/AUP in place that allows you to shut them down without prior notice -- i.e. the only warning they get is the one in the TOS/AUP when they sign it. Add a clause that allows you to confiscate their data/equipment -- this will deter a *lot* of spammers from even trying to sign up with you, which in turn will greatly diminish the risk to your network and the amount of work you may have to do later. (The only reason any network has persistent/systemic issues with spam (as opposed to sporadic/isolated issues, which can happen to anyone) is that its operators are (1) lazy (2) stupid (3) incompetent (4) greedy. There are no exceptions. There are also no excuses.) ---Rsk
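The DNSBL rejection step recommended above boils down to a simple DNS trick: reverse the octets of the connecting IP, append the blocklist zone, and look the name up; any answer means "listed," and the MTA then rejects the connection with a 5xx reply before accepting DATA. A minimal sketch of that mechanics in Python (the IP is a documentation address, and `is_listed` performs a live DNS query, so treat this as illustration rather than a drop-in filter):

```python
import socket

def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build the DNSBL lookup name: reversed octets, then the zone."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip: str, zone: str) -> bool:
    """True if the zone returns any A record for the query name.

    This is a real network lookup; production MTAs do the same check
    internally (e.g. sendmail's dnsbl FEATURE, postfix reject_rbl_client).
    """
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False

# The MTA would reject during the SMTP conversation when this hits.
print(dnsbl_query_name("192.0.2.99", "xbl.spamhaus.org"))
# -> 99.2.0.192.xbl.spamhaus.org
```

In a real deployment you would not write this yourself: every MTA named in the post exposes this check as a one-line configuration directive, and the sketch only shows what that directive does under the hood.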
Re: SORBS Contact
On Wed, Aug 09, 2006 at 03:42:32PM -0600, Allan Poindexter wrote: Far more damage has been done to the functionality of email by antispam kookery than has ever been done by spammers. That is not even good enough to be wrong. ---Rsk, with apologies to Enrico Fermi
Re: Multi ISP DDOS
On Thu, May 04, 2006 at 08:21:04PM -0400, Martin Hannigan wrote: The killer here is that they asked a lot of people a year ago whether this was a good idea and everyone said no. Agreed. It's just the latest in the series of fiascos that we've seen when people try to respond to abuse with abuse. It doesn't work, it's not going to work, and the most likely outcome of any attempt to make it work will be yet another illustration of the law of unintended consequences. (e.g. Lycos' MakeLoveNotSpam) Not to mention that furnishing useful intelligence to the enemy (which BS does by design) is a poor strategy. ---Rsk
Re: SMTP store and forward requires DSN for integrity
I agree with nearly all of your analysis, but want to add a few small points of my own. On Sun, Dec 11, 2005 at 04:53:03AM -0600, Micheal Patterson wrote: Can BATV correct this? Possibly. After reading further and thinking about it: I believe the answer isn't "possibly," but "almost certainly not." Consider the ~100M zombies (hijacked Windows systems) out there. Mail traffic emitted by any malware resident on them is externally indistinguishable from mail traffic emitted by their former owners (operating under the misconception that it's still their computer). Now suppose for a moment that we had Email Magic Bullet Technology (EMBT) which enabled us to trace any/all messages back to their origination point. And suppose that 100,000 sites out there (using some form of reliable malware detection) independently using EMBT conclude that they have received copies of the Microsoft Windows virus du jour from [EMAIL PROTECTED] at IP address 1.2.3.4. Thus all 100,000 sites are now in a position, if they wish, to emit a DSN addressed to [EMAIL PROTECTED] stating "you have virus blah blah version blah blah etc." My first observation is that emitting these DSNs, even *with* EMBT, is a pointless exercise. Doing so increases, yet again, the volume of useless mail traffic traversing the Internet. It's just more self-promoting spam from AV vendors -- it's not like anyone actually _reads_ this stuff. And even if they did: who's going to read 100,000 messages? My second observation is that the combined volume of these DSNs may constitute a rather effective DDoS on example.com's MXs. My third observation is that such DDoS attacks could easily be redirected against third party mail servers by manipulation of MX records. 4. (I got tired of saying "my Nth observation") I'm beginning to conclude that any technology which causes A, in response to traffic from B, to generate traffic to C, is probably not a good idea. 5. 
Keep in mind that malware resident on a hijacked system is in complete control of the box and thus has access to any cryptographic keys in use as well as any incoming mail being retrieved with POP/IMAP. So even if we presume the existence of a clueful and attentive user (then why is the box in a hijacked state?) there is no guarantee that the DSNs will actually be presented to the user. 6. How is a recipient of a DSN to know that it's authentic? After all, the fact that EMBT enables someone to generate a DSN in response to received virus-contaminated traffic doesn't prevent anyone else from generating a DSN in response to...nothing. Consider a piece of malware which generates such DSNs and its impact on an EMBT. 7. All of the problems cited above become much more interesting if the hijacked box isn't an end-user system, but a mail server. 8. (similar to observation 4) Adding more positive feedback loops to an Internet-wide mail system that already carries far too much junk traffic is a bad move. We need to dampen, not amplify. ---Rsk
Re: SMTP store and forward requires DSN for integrity (was Re:Clueless anti-virus )
On Fri, Dec 09, 2005 at 09:03:10AM -0800, Douglas Otis wrote: There is a solution you can implement now that gets rid of these tens of thousands of virus- and abuse-laden DSNs you see every day before the data phase. BATV is not a solution. It's a band-aid. It fails to address the underlying problem and instead focuses on merely trying to cope with one of the symptoms of that problem. And it also places the burden on the people who are NOT PART OF THE PROBLEM, and who therefore should not be the ones tasked with solving it. The solution isn't to try to figure out what to do with UBE generated by broken mail systems, broken anti-spam systems, broken anti-virus systems, and the like; the solution is to fix those systems so that they don't generate it. "The best place to stop spam is as near its source as possible," goes the maxim, and THE best place IS its source. This is not hard. It's been discussed at extraordinary length on spam-l, and one of the outcomes of those discussions is that while there are some edge cases that can be tough (depending on mail system architecture) those are a tiny minority, and the overwhelming majority can be dealt with quickly and easily. I would strongly suggest that anyone wishing to continue this discussion (a) read the archived discussion thoroughly and (b) take it up on spam-l, where it's probably more appropriate and where huge amounts of relevant clue exist among participants. ---Rsk
Re: Clueless anti-virus products/vendors (was Re: Sober)
On Wed, Dec 07, 2005 at 02:15:00PM -0800, Douglas Otis wrote: When auth fails, one knows *right then* c/o an SMTP reject. No bounce is necessary. This assumes all messages are rejected within the SMTP session. Yes, exactly -- and the point several of us have been making is that this is (a) easy (well, provided you're using a quality MTA; if not, then switch to one), (b) part of running a sane mail system, (c) fast, (d) resource-friendly, and (e) most important of all, the _only_ way to avoid sending UBE in response to forgeries (which are not going away any time soon, or quite possibly ever). (Please note: there are no exceptions to the UBE specification for DSNs. If DSNs are: - sent to forged senders (thus unsolicited) - in bulk (thus bulk) - via email (thus email) then they are UBE, which is THE definition of spam -- and which deliberately omits any mention of content, purpose or other things that are irrelevant to the spam/not-spam question.) ---Rsk
Re: Clueless anti-virus products/vendors (was Re: Sober)
On Sun, Dec 04, 2005 at 03:18:29PM -0800, Steve Sobol wrote: Blocking based on rDNS simply because it implies that a certain piece of equipment is at that address is... not advisable. Agreed. Those blocks aren't in place because there's a certain piece of equipment at those addresses (hostnames); they're in place because all of them have emitted spam. ---Rsk
Re: Clueless anti-virus products/vendors (was Re: Sober)
On Sun, Dec 04, 2005 at 09:27:58PM -0600, Church, Chuck wrote: What about all the viruses out there that don't forge addresses? Three responses. First, these are pretty much a minority nowadays: so unless someone wants to code AV responses on a case-by-case basis, the best default is "don't respond, ever." Second, rejecting virus-contaminated traffic during the SMTP phase completely alleviates the need to address this question, since no outbound mail is generated. Third, put the first two points aside. Let's suppose, for a moment, that there existed a completely reliable mechanism for figuring out the real sender (in the sense of the owner of the infected system) for a particular virus-contaminated message. Think about what would happen if the 100 or 1,000 or 10,000 or 100,000 people getting outbound viruses from that user all generated responses. The first effect would be to double the quantity of useless mail messages traversing the Internet. The second effect would be to hammer the user's mailbox and whatever mail server it happened to be residing on. (Consider how this effect would be multiplied if many users of X all had infected systems sending SMTP traffic directly, but of course were all receiving inbound mail via X's mail server(s).) The third effect would really be a non-effect, as the user's most likely response (thanks to years of conditioning imposed by the problem we're discussing here) would be to do nothing: experience has taught users that such warnings are bogus and can safely be ignored. The user's second-most-likely response would be indignant denial (despite logs showing positive identification). The user's third-most-likely response would be to report the responses as spam and/or block the senders. Bottom line: nothing good can come of generating outbound mail in response to rejected inbound mail; the best course of action is to issue the appropriate 5XX response and be done with it. ---Rsk
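The "reject during the SMTP phase" approach argued for above amounts to a simple policy decision: the scanner's verdict maps to a reply code sent at end-of-DATA, so a rejected message is never accepted and there is nothing to bounce to a (likely forged) sender. A minimal sketch, with hypothetical verdict strings and reply texts (any real MTA wires this in through its own filter hooks, e.g. a milter):

```python
def smtp_data_reply(verdict: str) -> str:
    """Map a content-scan verdict to the SMTP reply sent at end-of-DATA.

    A 5xx reply here leaves responsibility with the sending MTA: we
    never accept the message, so we never generate a DSN that could
    land on an innocent forged address.
    """
    if verdict == "virus":
        return "554 5.7.1 Message rejected: virus detected"
    if verdict == "spam":
        return "554 5.7.1 Message rejected: spam detected"
    return "250 2.0.0 Message accepted"

# A rejection, not an accept-then-bounce:
print(smtp_data_reply("virus"))
```

The contrast is with accept-then-scan designs, which must either discard silently or emit a bounce after the fact; the function above has no code path that generates outbound mail at all.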
Re: Clueless anti-virus products/vendors (was Re: Sober)
On Sun, Dec 04, 2005 at 09:58:20AM -0500, Todd Vierling wrote: If it is on by default, it is a bug, and not operator error. (In the case of the Barracuda) there are at least two such switches: one for spam, one for viruses. Note that when both are set to off, the box still occasionally emits such messages under as-yet-undetermined circumstances. I attempted to persuade one of Barracuda's engineers, months ago, that there was absolutely no valid reason for including a feature whose only purpose was abuse redirection. Incredibly, I was told "the customers want this feature," and that it would not be removed. And thus we now have blacklist entries such as: barracuda1.aus.texas.net barracuda.yale-wrexham.ac.uk barracuda.morro-bay.ca.us barracuda.ci.mtnview.ca.us barracuda.elbert.k12.ga.us barracuda.fort-dodge.k12.ia.us barracuda.ci.garner.nc.us barracuda.ship.k12.pa.us and many, many more. Perhaps Barracuda should simply rename those switches as "spam random individuals" and/or "get yourself blacklisted," as those are the only two things likely to result from turning them on. (Virus warnings to forged addresses are UBE, plain and simple.) When sent in bulk (as they inevitably are), absolutely. There's no exception in the canonical definition of spam (which _is_ UBE) for messages sent by broken anti-virus software, nor should there be. ---Rsk
Re: OT: Yahoo- apparently now an extension of the Chinese govt secret police....
On Wed, Sep 07, 2005 at 03:10:12PM +0100, [EMAIL PROTECTED] wrote: Every company has to obey the laws of the jurisdictions in which they do business, and for international companies, that list of jurisdictions can be very, very long. Obeying the (local) law is, in most cases, very reasonable. But when presented with *that* request from *that* government, the correct response -- from anyone with a conscience and a spine -- is "go to hell." ---Rsk
Re: Yahoo! -- A Phisher-friendly hosting domain?
Two comments. <soapbox> First, it's everyone's responsibility to do what's necessary to prevent their operation from being an abuse source, vector, or support service. That includes registrars, web hosts, DNS providers, email services, consumer ISPs, webmail services, corporations, end-users -- *everyone*. Nobody gets a pass. Of course, this isn't what's happening: and that's why abuse is such a massive problem. If people actually (gasp!) began running their operations in a responsible manner (starting with very simple and easy measures like "read your abuse mailbox and take immediate action on all reported problems") then all these issues would of course still exist -- but at greatly reduced levels. However, it seems that many prefer to implicitly support abuse by doing nothing...that is, until their network neighbors grow tired of their inaction, and decide to put a cork in it by collaboratively blacklisting them -- at which point, the typical response, instead of being a contrite admission of long-term systemic failure, is plaintive, mock-outraged whining about how terribly unfair it all is. </soapbox> Second, it appears to me that Yahoo may be contending with Microsoft for the title of largest spam-and-abuse support operation on the Internet. Both are completely infested with abusers of all descriptions, not just in the freemail operations, but their mailing lists, web hosting, etc. Both have established very long track records of not just failing to take action, but *refusing* to take action, even when someone else does their job for them, compiles the applicable evidence, and presents it to them. (Search, for example, the Google archives of Usenet for either "yahoo clueless" or "hotmail clueless" for more examples than any sane person, or even Fergie ;-), would ever want to read.) Here's a recent note (courtesy of John Levine) which is complementary to the one previously presented concerning Yahoo: From: [EMAIL PROTECTED] (John R. 
Levine) Newsgroups: news.admin.net-abuse.email Subject: Re: Microsoft -- starting to support spam? Date: 24 Aug 2005 11:25:40 -0400 [...] The other day I collected a list of domains hosted by MSN. Here's a few. If you were in the domain hosting business, would you let your customers register and use these? Microsoft did. MY-EBAY-EBAY.COM MY-EBAY-SIGNIN-BILLING-ACCOUNT.COM MY-EBAYAUCTION.COM MYEBAY-EBAY.COM ONLINE-EBAY-ESCROW.COM ONLINEAUCTIONSONEBAY.COM ONLINESAFETY-EBAY.COM PAYMENT-CONFIRM-EBAY.COM PAYMENT-DEPARTAMENT-EBAY.COM PAYMENT-DEPARTMENT-EBAY.COM PAYMENT-EBAYALERT.COM PAYMENTS-EBAY-SQUARETRADE.COM PAYMENTSUPPORT-EBAY.COM PLANETEBAY-VERIFICATION.COM PLANETEBAYONLINE.COM PURCHASE-EBAYSQUARETRADE.COM REACTIVE-EBAY.COM SAFE-DEPARTAMENT-EBAY.COM SAFE-SQUARETRADE-EBAYDEALS.COM SAFEDEALS-EBAYSQUARETRADE.COM SAFEDEPARTAMENT-EBAY.COM SAFEHARBOR-EBAYCENTRAL.COM SAFETY-PROTECTION-EBAY.COM SAFETYTEAM-EBAY.COM SCGI-EBAY-EBAYISAPI-DLL.COM PAYPAL-ACCOUNT-8414SWQ9.COM PAYPAL-ACCOUNT-SA435QS.COM PAYPAL-ACCOUNTINGS.COM PAYPAL-ACCOUNTS-UPDATE.COM PAYPAL-ALERT.COM PAYPAL-CONFIRMATION-ID-0746795.COM PAYPAL-CONFIRMATION-ID-PP0746S795.COM PAYPAL-CONFIRMATION-ID-PP4145570.COM PAYPAL-FRAUD-ALERT.COM PAYPAL-INTL-SERVICE.COM PAYPAL-MEMBER-SERVICES.COM PAYPAL-SECURES-UPDATES.COM R's, John Keep this in mind when anyone from either Yahoo or Microsoft pretends to somehow be interested in anti-spam or anti-phishing activities. Neither has demonstrated, to date, the slightest inclination or ability to even keep its own operation relatively free of spammers, phishers, etc. despite having at its fingertips the cumulative work of a large number of netizens who have diligently reported these problems to them. It's thus completely disingenuous of them to feign any interest in doing so on an Internet-wide basis. ---Rsk
Re: Cisco crapaganda
[late followup] On Sat, Aug 13, 2005 at 07:32:20PM +0100, Dave Howe wrote: Rich Kulawiec wrote: More bluntly: the closed-source, faith-based approach to security doesn't cut it. The attacks we're confronting are being launched (in many cases) by people who *already have the source code*, and who thus enjoy an enormous advantage over the defenders. TBH though, usually the open source faith based approach to security doesn't cut it either. its easy to say its open source, therefore anyone can check the code but much harder to actually find someone who has taken the time to do it Ah, but I covered that, or at least I thought I did: D. Any piece of source code which hasn't been subjected to widespread peer review should be presumed untrustworthy -- because it not only hasn't been shown to be otherwise, the attempt hasn't even been made. (Note that the contrapositive isn't true -- peer review is only a necessary condition, not a sufficient one.) Which means: just because it's open source and therefore anyone can check it, doesn't mean that anyone has...or that they're competent...or that they were thorough...or that they found all the issues. Like I said, it's a necessary condition, not a sufficient one. But...even with all the tools that have been developed -- everything from formal proofs of correctness to array bounds checkers to stack overflow guards to you-name-it...it seems that in 2005 the very best available/practical method we have for trying to produce secure code is lots and lots of independent and clueful eyeballs. I'm not saying that's a desirable situation, because it's not: it would be nice if we had something better. But we don't, at least not yet. Another way of putting it: no matter who you are, from one lone programmer to 10,000, the Internet is more thorough than you are. Now, one could counter-argue that keeping source code secret provides some measure of security. I'm not buying it: I don't think there's any such thing as secret source code. 
And even if there was: if someone with enough cash to fill a briefcase wants it: they WILL get it. I suppose what I'm saying is: let's drop the pretense that closed-source really and truly exists, let's get the critical code out in the open, and let's get started with the process of beating it into shape. Because we're already paying (and paying and paying) a huge price for continuing the charade. ---Rsk
Operational: Wiltel Peering with MCI problems around D.C
Anyone else (Wiltel customers especially) running into an operational issue around D.C. with partial connectivity? It would seem MCI and Wiltel around D.C. have an 'informal' peering relationship and it's been errored right now for about 39 hours with a half-duplex route announcement. This has been affecting us with some loss of connectivity that's not there when we test same sites from other ISP clouds. Since it's informal, the help desk system at one or both ends may be having problems entering a ticket w/o an account number for the circuit. The usual channels are not producing results, and we're starting to get engineers on the lower end of the evolutionary food chain and finger pointing between wgc and mci that's not helping. Tried a PCH, haven't heard yet. ... 5 nycmny2wcx2-pos0-0-oc192.wcg.net (64.200.68.157) 5.786 ms 6.510 ms 6.114 ms 6 hrndva1wcx2-pos1-0-oc192.wcg.net (64.200.210.178) 12.029 ms 11.883 ms 11.582 ms 7 washdc5lcx1-pos5-0.wcg.net (64.200.240.194) 12.840 ms 12.559 ms 12.887 ms ...traffic dies
Re: Operational: Wiltel Peering with MCI problems around D.C (resolved)
This issue is resolved. Thanks to all who responded on and off list. On Thu, 18 Aug 2005, Rich Emmings wrote: Anyone else (Wiltel customers especially) running into an operational issue around D.C. with partial connectivity?
Re: Cisco crapaganda
On Tue, Aug 09, 2005 at 04:11:45PM +0100, [EMAIL PROTECTED] wrote: There really is no such thing as closed source. I've been saying this for years, and I'm sure you and I aren't the only ones. Corollaries: A. If open publication of the full source code of XYZ would render it insecure, then XYZ is _already_ insecure. B. In analyzing any attack, it's prudent to presume that the attackers have the full source code of every piece of software involved. [1] C. It's not secure until everyone knows exactly how it works and it's still secure. D. Any piece of source code which hasn't been subjected to widespread peer review should be presumed untrustworthy -- because it not only hasn't been shown to be otherwise, the attempt hasn't even been made. (Note that the contrapositive isn't true -- peer review is only a necessary condition, not a sufficient one.) More bluntly: the closed-source, faith-based approach to security doesn't cut it. The attacks we're confronting are being launched (in many cases) by people who *already have the source code*, and who thus enjoy an enormous advantage over the defenders. It's time to level the playing field. It's time for all the vendors to publish ALL the source code so that we at least have the same information as our adversaries. Because relying on the supposed secrecy of source code is relying on a fantasy. ---Rsk [1] Either because it leaked (discarded computer equipment, backup tapes, etc.), was stolen from outside (network break-in, physical break-in), was stolen from inside (payoffs), or other means. Borrowing heavily from Bruce Schneier's analysis of what it'd be worth to buy an election: what's the dollar value on the open market of, oh, let's say, the full source code to one of Cisco's popular routers? Maybe $100K? $250K? Maybe more, considering what it might facilitate? Whatever that number is, that's the amount that prospective attackers may be presumed to be willing to spend to get it. 
And whether they spend it on R&D, or paying someone who's already done the R&D, or just cutting to the chase and paying off someone with access to it, doesn't really matter: if they're willing to spend the money, they _will_ get it.
Re: Yahoo and Cisco to submit e-mail ID spec to IETF
On Mon, Jul 11, 2005 at 02:22:07PM +, Fergie (Paul Ferguson) wrote: Yahoo and Cisco Monday plan to announce they will submit their e-mail authentication specification, DomainKeys Identified Mail (DKIM), to the IETF to be considered as an industry standard. None of these have the slightest operational value. They are either (a) attempts to exert control over email (for profit, of course) or (b) PR exercises -- for instance, in Yahoo's case, to distract attention from the enormous amount of spam/spam support coming from or facilitated by Yahoo Stores and their freemail operation. See, for instance: Spammers Continue to be the Biggest (By Far) Supporters of Email Authentication http://www.techdirt.com/articles/20050711/1945259_F.shtml Oh, not that I expect the backers of these schemes to stop flogging them -- apparently they've managed, mostly by grandiose and bogus claims, to convince at least _some_ gullible people that they have the answer to spam. But they don't -- even if the perfect email auth method existed (and of course it doesn't) and was instantaneously and globally deployed tomorrow (ha!), the effect on SMTP spam would be a momentary hiccup, no more, and of course the effect on other forms of spam would be zero. ---Rsk
Re: OMB: IPv6 by June 2008
According to IANA (http://www.iana.org/assignments/ipv4-address-space), MIT and MERIT are the two .edu /8 holders on the list. Stanford turned their /8 in a while ago. Many? On Fri, 8 Jul 2005, Daniel Golding wrote: Rubbish. Many of the organizations that hold legacy /8s are Universities. If a .edu can pick up even a few million dollars from selling off a class A, they will. After all, they could simply sell chunks.
Re: E-Mail authentication fight looming: Microsoft pushing Sender ID
[late followup, sorry] On Thu, Jun 23, 2005 at 05:42:17AM -0700, Dave Crocker wrote: The real fight is to find ANY techniques that have long-term, global benefit in reducing spam. We've already got them -- we've always had them. What we lack is the guts to *use* them. As we've seen over and over again, the one and only technique that has ever worked (and that I think ever *will* work) is the boycott -- whether enforced via the use of DNSBLs or RHSBLs or local blacklists or firewalls or whatever mechanism. It works for a simple reason: it makes the spam problem the problem of the originator(s), not the recipient(s). It forces them to either fix their broken operation (any network which persistently emits or supports spam/abuse is broken) or find themselves running an intranet. We've known that this works for 20-odd years. It hasn't stopped working; what's stopped is the willingness to use it en masse, and to endure the consequences thereof. And no new technology, however clever, is a substitute for the will to make this happen when necessary. I grow rather tired of people whining about the spam (and abuse) problem on the one hand...while refusing to take simple, well-known, and proven steps to push the consequences back on those responsible for it. While we may no longer be in a position to remove particularly egregious networks from the Internet, we most certainly are in a position to remove the Internet from them via coordinated group action -- producing an equivalent result. It's gonna come down to this sooner or later anyway. We might as well do it now, rather than waste another decade fiddling around with clever-but-useless technical proposals and worthless legislation while the problem continues to proliferate and diversify. ---Rsk
Re: E-Mail authentication fight looming: Microsoft pushing Sender ID
On Wed, Jun 22, 2005 at 06:39:07PM -0700, william(at)elan.net wrote: P.S. It would really be great if the IETF remained true to its origins and goals, did technical reviews, and selected proposals based on technical capabilities and not on what large company is exerting pressure on them (especially not by means of press announcements). Yes, it would. It would also be great if the IETF realized that there is really very little need for email authentication: (a) forgery is a minor problem compared to spam, and even solving the forgery problem completely (which isn't gonna happen) would have a temporary and negligible effect on spam; (b) the authentication problem can't be solved anyway until the complete lack of security on hundreds of millions of network endpoints is solved; and (c) the originating IP address of any SMTP connection tells you _exactly_ who is responsible for that traffic, whatever it turns out to be. ---Rsk
Re: Email peering
On Fri, Jun 17, 2005 at 11:48:58AM -0400, Ben Hubbard wrote: You seem to repeatedly describe a solution that becomes so big that it (at least substantially) replaces 25/SMTP. That's what I don't think will work, or is needed. Please let me borrow Ben's point and expand on it. Spam as it's usually discussed (spam propagated via SMTP) is only part of the spam problem. We've seen Usenet spam, chat room spam, http referrer log spam, blog spam, and so on. And all of those bundled together and labeled as spam are only part of the overall network abuse problem -- which also involves phishing, zombies, DoS attacks, spyware, etc. And these are all (increasingly) interrelated problems, e.g. spam is used to phish people to sites which forcibly download spyware, and so on. We could (and some already have) spend an enormous amount of time devising very clever solutions to these and deploying them. But as we've seen, doing so usually results only in a shift in the nature of the abuse, not an overall reduction in it. So even if we had The Perfect Solution to SMTP spam and it was globally deployed tomorrow and had no adverse side-effects...we'd buy ourselves a brief respite, nothing better. I'm not saying some of the technical approaches aren't clever. They are. But none of them are going to solve the problem for any acceptable value of solve, not because there's anything wrong with them per se, but because they're technological attempts to solve the problem at its end points -- rather than its source points. The best place to stop abuse is as near its source as possible. Meaning: it's far easier for network X to stop abuse from leaving its network than it is for 100,000 other networks to defend themselves from it. Especially since techniques for doing so (for instance, controlling outbound SMTP spam) are well-known, heavily documented, and easily put into service.
The problem is that network X, for many values of X (see the data compiled by Spamhaus or SPEWS or any number of others) hasn't done so. Whether that failure is due to incompetence, greed, laziness, negligence or anything else is an interesting question...but really doesn't matter, because regardless of the cause, the fastest way to get it fixed is to make it X's problem...*not everyone else's*. (It's often impressive how fast X can move--despite protestations otherwise--when this situation is created.) Those who have been around a long long time know that this is how it used to be. If your network started spewing crap, and didn't stop spewing crap in a fairly timely manner, you got a phone call or email explaining that someone had their hand on your plug and was going to pull it. The point? The point is that there is no need for any new technology to deal with the spam/abuse problem. What there is a desperate need for is the *will* to use the technology we already have -- to shift the burden of dealing with abuse onto those who are permitting it to originate from their network. This can be done in a number of ways: using DNSBLs, firewalls, routers, whatever. Because if it's not done, then Network X, for many values of X, will be perfectly happy to watch everyone else innovate and scramble and spend money to defend themselves *as long as X doesn't have to*. As we've seen. For many years. Over and over and over again. After all, why should they? There's nothing in it for them and no downside if they don't. [...] if you give people the means to hurt you, and they do it, and you take no action except to continue giving them the means to hurt you, and they take no action except to keep hurting you, then one of the ways you can describe the situation is it isn't scaling well. --- Paul Vixie So either the collective we has the will to stop putting up with this nonsense -- or we don't. If it's the former, then we already have all the tools we need.
If it's the latter, then nothing we come up with, no matter how clever it is, is going to make any real difference. ---Rsk
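One of the well-known, heavily documented techniques alluded to above for controlling outbound SMTP spam at the source is simply to restrict port-25 egress to a network's designated mail relays. A minimal sketch in Linux iptables syntax; the relay address is a placeholder and the rules are an illustration of the idea, not a recipe for any particular network:

```shell
# Allow outbound SMTP only from the site's sanctioned relay
# (placeholder address 192.0.2.25). Every other host must submit
# mail through that relay instead of speaking port 25 directly
# to the world -- which is what lets zombies spam unimpeded.
iptables -A FORWARD -p tcp --dport 25 -s 192.0.2.25 -j ACCEPT
iptables -A FORWARD -p tcp --dport 25 -j REJECT --reject-with tcp-reset
```

The point of the REJECT (rather than silently dropping) is that legitimate users who misconfigure a client get an immediate, diagnosable failure.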
Re: VerizonWireless.com Mail Blacklists
On Tue, May 31, 2005 at 04:46:01PM -, John Levine wrote: VZW recently confirmed that their mail system is separate from VZ's, and whatever mistakes they may make, they're not VZ's. Okay, fine -- and a look at DNS seems to back this up (unless I'm missing something). And I've no desire to lay VZ's mistakes at VZW's feet, or vice versa -- but that still leaves whoever-is-affected (like the original poster or anyone else out there) to deal with the issues. And the lack of participation by VZ and VZW in the leading applicable forum (i.e. Spam-L) isn't helping. At least some of the other folks are engaged in dialogue with their peers, even if what they're saying isn't to everyone's liking. (As to Verizon itself, since three different people pointed out the relative lack of SBL listings: keep in mind that SBL listings are put in place for very specific reasons, and aren't the only indicator of spam. Other DNSBLs and RHSBLs, e.g. the CBL, use different criteria and thus provide different measurements (if you will) of spam. So, to give a sample data point, in the last week alone, there have been 315 spam attempts directed at *just this address* from 194 different IP addresses (list attached) that belong to VZ. Have I reported them? Of *course* not. What would be the point in that?)
---Rsk wbar1.chi1-4-10-118-158.chi1.dsl-verizon.net [4.10.118.158] hnllhi1-ar6-4-11-039-125.dsl-verizon.net [4.11.39.125] [EMAIL PROTECTED] [4.3.236.250] hnllhi1-ar3-4-3-111-154.hnllhi1.dsl-verizon.net [4.3.111.154] wbar2.wdc2-4.30.100.231.wdc2.dsl-verizon.net [4.30.100.231] wbar12.sea1-4.32.1.170.dsl-verizon.net [4.32.1.170] wbar12.sea1-4.32.2.144.dsl-verizon.net [4.32.2.144] atlnga1-ar2-4-34-191-127.atlnga1.dsl-verizon.net [4.34.191.127] chcgil2-ar7-4-34-128-080.chcgil2.dsl-verizon.net [4.34.128.80] wbar7.sea1-4-4-042-075.sea1.dsl-verizon.net [4.4.42.75] wbar8.sea1-4-4-065-255.sea1.dsl-verizon.net [4.4.65.255] wbar8.sea1-4-4-073-107.sea1.dsl-verizon.net [4.4.73.107] hnllhi1-ar3-4-42-103-001.hnllhi1.dsl-verizon.net [4.42.103.1] hnllhi1-ar3-4-43-152-035.hnllhi1.dsl-verizon.net [4.43.152.35] lsanca1-ar16-4-46-046-186.lsanca1.dsl-verizon.net [4.46.46.186] lsanca1-ar19-4-46-077-103.lsanca1.dsl-verizon.net [4.46.77.103] lsanca1-ar12-4-60-179-045.lsanca1.dsl-verizon.net [4.60.179.45] lsanca1-ar2-4-60-003-159.lsanca1.dsl-verizon.net [4.60.3.159] washdc3-ar8-4-62-076-106.washdc3.dsl-verizon.net [4.62.76.106] evrtwa1-ar5-4-65-000-098.evrtwa1.dsl-verizon.net [4.65.0.98] lsanca1-ar9-4-65-084-147.lsanca1.dsl-verizon.net [4.65.84.147] hnllhi1-ar7-4-7-214-201.hnllhi1.dsl-verizon.net [4.7.214.201] pool-64-222-182-239.man.east.verizon.net [64.222.182.239] pool-64-223-119-29.burl.east.verizon.net [64.223.119.29] pool-64-223-82-120.burl.east.verizon.net [64.223.82.120] pool-68-160-165-106.bos.east.verizon.net [68.160.165.106] pool-68-160-190-103.bos.east.verizon.net [68.160.190.103] pool-68-160-210-53.ny325.east.verizon.net [68.160.210.53] pool-68-161-112-23.ny325.east.verizon.net [68.161.112.23] pool-68-161-167-156.ny325.east.verizon.net [68.161.167.156] pool-68-161-42-244.ny325.east.verizon.net [68.161.42.244] pool-68-161-59-43.ny325.east.verizon.net [68.161.59.43] pool-68-162-13-70.nwrk.east.verizon.net [68.162.13.70] pool-68-162-145-6.pitt.east.verizon.net [68.162.145.6] 
static-68-162-251-148.bos.east.verizon.net [68.162.251.148] static-68-162-85-97.phil.east.verizon.net [68.162.85.97] pool-68-163-151-181.bos.east.verizon.net [68.163.151.181] pool-68-163-66-254.res.east.verizon.net [68.163.66.254] static-68-236-207-121.nwrk.east.verizon.net [68.236.207.121] pool-68-237-213-60.ny325.east.verizon.net [68.237.213.60] pool-68-238-16-251.rich.east.verizon.net [68.238.16.251] pool-68-239-58-165.bos.east.verizon.net [68.239.58.165] pool-70-104-104-209.chi.dsl-w.verizon.net [70.104.104.209] pool-70-104-119-185.chi.dsl-w.verizon.net [70.104.119.185] pool-70-105-12-148.rich.east.verizon.net [70.105.12.148] pool-70-105-207-214.scr.east.verizon.net [70.105.207.214] pool-70-106-208-58.chi.dsl-w.verizon.net [70.106.208.58] pool-70-107-198-95.ny325.east.verizon.net [70.107.198.95] static-70-107-239-188.ny325.east.verizon.net [70.107.239.188] pool-70-108-31-161.res.east.verizon.net [70.108.31.161] pool-70-109-107-92.alb.east.verizon.net [70.109.107.92] pool-70-110-186-17.phil.east.verizon.net [70.110.186.17] pool-70-16-121-96.scr.east.verizon.net [70.16.121.96] pool-70-16-137-90.phil.east.verizon.net [70.16.137.90] pool-70-17-10-94.balt.east.verizon.net [70.17.10.94] pool-70-17-197-159.balt.east.verizon.net [70.17.197.159] pool-70-17-75-173.res.east.verizon.net [70.17.75.173] pool-70-18-148-156.norf.east.verizon.net [70.18.148.156] pool-70-18-215-41.ny325.east.verizon.net [70.18.215.41] pool-70-19-255-63.bos.east.verizon.net [70.19.255.63] pool-70-20-192-78.phil.east.verizon.net [70.20.192.78] pool-70-20-241-84.phil.east.verizon.net [70.20.241.84] pool-70-20-45-51.man.east.verizon.net [70.20.45.51] pool-70-20-54-159.man.east.verizon.net
Re: VerizonWireless.com Mail Blacklists
On Thu, May 19, 2005 at 05:24:41PM -0700, Crist Clark wrote: It appears VerizonWireless.com has some rather aggressive mail filters. Verizon is hopelessly clueless when it comes to mail system operations and mail filters -- as evidenced by their ongoing decision to deliberately provide anonymizing spam support and DoS attack services to anyone clever enough to use them via their abusive callback system, and by their total failure to address the torrent of spam emanating from their own network. Which is a roundabout way of saying that it's probably best to find a way to work around whatever stupidity they're inflicting on you, as it's very unlikely that anyone at Verizon is capable of even comprehending the problem, let alone taking steps to correct it. ---Rsk
Re: IBM to offer service to bounce unwanted e-mail back to the
If FairUCE can't verify sender identity, then it goes into challenge-response mode, sending a challenge email to the sender, Let me rephrase that more accurately: ...spamming everyone who has been so unfortunate as to have their address forged into a mail message... Challenges thus issued are unsolicited: the challenged party had absolutely nothing to do with the inbound mail message. If such a system is used in production, then challenges will, inevitably, be sent in bulk. I trust it's clear that these challenges are email. Unsolicited bulk email, or UBE, is the canonical and only correct definition of [SMTP] spam. So not only does FairUCE ignore a fundamental principle of competent anti-spam defense (e.g. do not generate still more junk mail traffic at a time when we are drowning in junk mail traffic), it does so by generating outbound spam. How very nice. See, BTW, for some background info: http://www.techzoom.net/paper-mailbomb.asp which discusses similar issues. (Thanks to Bruce Gingery for pointing this out.) Beyond that, as Lycos Europe has already belatedly figured out, attempts to strike back at spammers which presume (as FairUCE naively does) that spammers themselves will not rapidly deploy effective countermeasures are doomed to fail and, in all probability, doomed to abuse innocent third parties. This is why responsible anti-spam techniques do not even *attempt* to fight abuse with abuse. I suggest further discussion be moved to Spam-L (a) before NANOG is overrun with it again and (b) because most anti-spam experts and other interested parties may primarily be found there, not here -- and extensive discussion of this particular issue is already in progress anyway. ---Rsk
Re: Utah governor signs Net-porn bill
On Tue, Mar 22, 2005 at 03:49:44PM -0700, pashdown wrote: In the end the bill itself doesn't have a big impact on this ISP's business. We have used Dansguardian for many years now along with URLblacklist.com for our customers that request filtering. The fact that its lists and software are open for editing and inspection is the reason I chose this over other commercial methods. What is the plan -- if any -- to deal with the hosting of the porn sites on the computers of the very people from whom they're supposed to be blocked? What I'm referring to is the occasional spammer tactic of downloading web site contents into a hijacked Windows box (zombie) and then using either redirectors, or rapidly-updating DNS, or just plain old IP addresses in URIs to send HTTP traffic there. This seems to be a tactic of choice on those occasions when the content is of a dubious nature: kiddie porn, warez, credit card numbers, identity theft tools, that sort of thing. Even *detecting* such things is difficult, especially when they're transient in nature and hosted on boxes with dynamic IP addresses. So how is any ISP going to be able to block customer X from a web site that's on customer X's own system? Or on X's neighbor Y's system? Oh...and then we get into P2P distribution mechanisms. How is any ISP supposed to block content which is everywhere and nowhere? ---Rsk
Re: IBM to offer service to bounce unwanted e-mail back to the computers that sent them
On Tue, Mar 22, 2005 at 10:24:37AM -0800, Andreas Ott wrote: http://money.cnn.com/2005/03/22/technology/ibm_spam/ If this write-up is accurate, then this is incredibly stupid in multiple ways and on multiple levels. I *hope* that this is just a misperception based on poor writing and that nobody at IBM is actually seriously contemplating something that's simultaneously useless and abusive. ---Rsk
Re: sorbs.net
On Tue, Mar 15, 2005 at 11:21:35AM -0800, Randy Bush wrote: o could this be used as a dos and then become extortion? Unlikely. Blocklists are used by choice, and blocklists which either aren't effective or don't have sane policies don't get chosen often. (See BLARS, which even blars was recommending that you don't use the last time I checked.) So if someone tried this approach, the most likely outcome is that those using it would stop and the problem would evaporate. ---Rsk
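For context on why blocklist use is purely the receiving site's choice: the mechanics of consulting one are trivially simple and entirely local. A hedged sketch of the standard DNSBL query convention (the zone name dnsbl.example.org is a placeholder, not a real list): a DNSBL publishes its listings in DNS, and a mail server checks a connecting client by reversing the client's IPv4 octets, appending the list's zone, and doing an ordinary A-record lookup.

```python
import ipaddress

def dnsbl_query_name(client_ip: str, zone: str = "dnsbl.example.org") -> str:
    """Build the DNS name a mail server queries to check client_ip
    against a DNSBL: the IPv4 octets reversed, then the list's zone."""
    octets = str(ipaddress.IPv4Address(client_ip)).split(".")
    return ".".join(reversed(octets)) + "." + zone

# The server then resolves this name: an A record (conventionally
# within 127.0.0.0/8) means "listed"; NXDOMAIN means "not listed".
```

So checking 192.0.2.1 means resolving 1.2.0.192.dnsbl.example.org -- and what, if anything, is done with the answer remains entirely the querying operator's local policy, which is the point of the post above.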
Re: sorbs.net
On Tue, Mar 15, 2005 at 05:44:41PM -0500, Paul G wrote: unfortunately, that *still* didn't stop people from using it, which translated into an unresolvable headache for me as an SP. Then gripe at the people who chose to use it: it was *their* decision, and if it was a poor one, then they are the people who need to be held accountable for it. Look, if I want to publish a blocklist of all domains with the string er in them and all IP addresses ending in .7, that would be a silly thing to do: but after all, it's just a list. It doesn't _do_ anything until someone decides to use it for some purpose. And if they're insane enough to do so, well, shrug, so be it. It's their system/network; they're free to decline any inbound traffic they don't wish to receive. And you, and I, and everyone else who's not on their system/network, don't get a vote. ---Rsk
[off-list] Re: High volume WHOIS queries
On Tue, Mar 01, 2005 at 09:17:48AM -0500, Hannigan, Martin wrote: I don't know that this is the case, I suspect it's resource management. If the database is getting slaughtered by applications on uncontrolled auto pilot, it's unusable for the rest of us. Understood. So why not make it easy -- both for yourselves and for everyone else? Just publish all WHOIS data on static web pages -- not even marked up with HTML, just plain ASCII text -- whose URLs are easy to construct, a la www.verisign.com/foo/bar/blah/example1.com www.verisign.com/foo/bar/blah/example2.net and refresh them from backing store whenever the real data changes. (And yes, I realize I'm using an example based on domains, not networks, but I trust it's still applicable.) This makes the load on the servers about as small as it's going to get. (Heck, they could be served from a cut-down web server designed to serve static content only.) It also makes it trivially easy for people to look things up without worrying about rate-limiting. Heck, once the search engines indexed it, it'd be even easier. As to ...then the spammers will mass-harvest it...: they already HAVE. They're busy selling it to each other on CD/DVD and via other means. This has been going on for years, and however-they're-doing-it, they're doing it well enough to acquire recently-modified data. So that toothpaste is completely out of the tube and there's no way to put it back in. I don't think any substantive purpose is served by pretending/wishing that it's otherwise: there's a demand for this data, and plenty of money to be made by those who will supply it, therefore it's going to be acquired and sold. But the people who *can't* access the data -- not without taking measures to evade the rate-blocking that's in place -- are abuse victims who are trying to track down those responsible. 
So I view the problem of overload on WHOIS servers as self-inflicted damage, easily fixed by giving up the pretense that restricting access to the data has any real value for anyone. (Well, it *does* benefit those selling it, but I trust that ensuring their profits isn't a goal that anyone's particularly worried about. ;-) ) ---Rsk
Re: Internet Email Services Association ( wasRE: Why do so few mail providers support Port 587?)
[ This discussion should be moved to Spam-L. ] On Mon, Feb 28, 2005 at 10:35:53AM +, [EMAIL PROTECTED] wrote: You misunderstand me. I believe *LESS* red tape will mean better service. Today, an email operator has to deal with numerous blacklisting and spam-hunting groups, many of which act in secret and none of which have any accountability, either to email operators, email users or the public. Nonsense. Those groups are accountable to those who choose to avail themselves of their work. Mail system operators -- as they have already demonstrated by their actions -- will not use those resources which are run incompetently or which do not provide satisfactory results. And the wide range of resources available (there are probably about 500 DNSBLs at the moment) and the variety of policies by which they're run provides healthy competition as well as a selection of tools sufficient to allow just about any local policy to be implemented. There is no need for the operators of these resources (say, SPEWS) to be accountable to anyone else. Why should they be? They merely publish a list. If you don't like their list or the policies they use to build it: don't use it. But know that everyone else will make their choices according to their own needs, not yours. I'd like to see all of this inscrutable red tape swept aside with a single open and public organization that I have been calling the Internet Mail Services Association. This will mean less red tape, more transparency, and more accountability. It will also mean that anyone with deep enough pockets to buy their way in will get a pass to spam as much as they want. Sorry, but this experiment has already been run (see bonded spammer) and has been a miserable failure. Besides, there is no inscrutable red tape. Dealing with DNSBLs is quite easy. Of course, you may not get the results *you* wish to have, but if you're running or occupying a spammer-infested network, then the results *you* wish to have are unimportant. ---Rsk
Re: AOL scomp
On Fri, Feb 25, 2005 at 01:34:21AM -0600, Robert Bonomi wrote: Because the recipient *expressly* requested that all mail which would reach my inbox on your system be sent to me at AOL (or any other somewhere else). I have three somewhat-overlapping responses to that -- and I'll try to stay focused on operational issues, since this is NANOG, not Spam-L. (But if you want to delve further into this, I would suggest shifting the discussion there, as it's probably more appropriate.) 1. SMTP spam is not mail. Oh, it may *look* like mail, it may arrive on the same port, and it may use the same protocol, but it's not mail. It's abuse. There's no reason to forward it to anybody. There's no reason to even accept it in the first place. Heck, there's no reason to even _emit_ it in the first place. Which (not emitting it) is what everyone should be trying to do, but few are. It seems to have somehow escaped the notice of many that spam/abuse doesn't fall out of the sky: it comes from systems. Those systems are on networks. Those networks are run by people. Those people are personally responsible for the spam/abuse that their networks emit. It's thus their responsibility to make it stop. But their failure to properly discharge that responsibility is why we have a major problem, or actually, several major problems, instead of a minor annoyance. [ Let's have a moment of nostalgia for the time when allowing this to happen day after day would not have happened because the plug would have been unceremoniously pulled after the first 24 hours. It's illuminating how quickly unsolvable problems are at least patched to an acceptable degree when connectivity is at stake. ] 2. Mail delivery requires permission of all of: - the network operator - the system operator - the mail subsystem operator - the end user (who of course are sometimes all the same person/people).
For instance, the end user may grant permission for someone to send 500M video clips attached to mail messages, but if the mail subsystem operator has limited mail message sizes to 10M, then permission is denied and the mail message is turned away. As another example, if the end user has granted permission for 5000 messages/second, but the network operator has capped bandwidth at a level below that required to transmit those messages, permission will be denied. What I'm trying to say is that merely having the permission of the end user to send something isn't enough: one also has to have permission from the authorities involved in providing the service, and their permission may be conditional on certain requirements enforced by automated agents, e.g., you will only be given permission if your message is <= 10M or you will only be given permission if your message does not contain a live virus. Or you will only be given permission if your message isn't spam, or you will only be given permission if your message isn't coming from a domain/system/network known to emit prodigious quantities of spam. I see no reason for any of those four people to grant permission to receive or forward spam *except* for those very few conducting research in the area (similarly for viruses), and those people aren't going to want it via a forwarder anyway. So while the end user on some remote system may have in fact said send me everything, including the spam (although this seems very unlikely) this does not constitute permission to do so, because that user isn't the only party involved, and their permission alone is insufficient. (logical AND required, not logical OR) And I doubt very much that the others will give their consent. 3. Dealing properly with forwarded spam which is rejected by the destination is tough: generating bounces will make the generator a spammer-by-proxy, and that's obviously unacceptable.
A much better course of action is to try to reject as much spam as possible -- rather than accepting it, trying to forward it, and then bouncing it (thereby spamming innocent third parties, and self-nominating for inclusion in various blacklists). Bottom line: deliberately forwarding spam makes you a spammer. Don't do it. If a user, for some bizarre reason, insists: don't do it. Tell them to find an irresponsible, spam-supporting ISP to do it for them -- there are certainly plenty of those around to choose from. This means that every such message from the 'forwarding' system to the destination system is, BY DEFINITION, solicited. The mailbox owner has expressly and explicitly requested those messages be sent to him at the receiving system. This is a definition of solicited which is wholly at odds with that in common practice for the last few decades. By your definition, the victim of a mailbombing attack would have somehow solicited that abuse merely because they have a forwarding alias on your system. I'm not having any. UBE (the proper definition of SMTP spam) doesn't magically become not-UBE just because it gets forwarded.
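The "logical AND required, not logical OR" permission model described in point 2 can be sketched as a veto scheme: delivery happens only if every party's policy consents. All names here are illustrative, not any real MTA's API:

```python
def delivery_permitted(message, policies):
    """Deliver only if ALL parties consent (logical AND):
    a veto from any single policy in the chain blocks delivery."""
    return all(policy(message) for policy in policies)

# Illustrative policies matching the examples in the text:
def mta_size_cap(msg):       # mail subsystem operator: messages <= 10M
    return msg["size_bytes"] <= 10 * 1024 * 1024

def no_live_virus(msg):      # system operator: no live viruses
    return not msg["has_virus"]

def user_consents(msg):      # end user has opted in to this traffic
    return msg["user_accepts"]
```

Even when user_consents returns True, an oversized or virus-laden message is still refused: the end user's permission alone is insufficient, which is the point of the post above.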
Re: AOL scomp
On Thu, Feb 24, 2005 at 02:53:14PM -0500, Mark Radabaugh wrote: Now here I would disagree. These are specific requests by individuals to forward mail from one of their own accounts to another one of their own accounts. But a request to forward mail is not a request to facilitate abuse by forwarding spam. I do not think AOL (or anyone) should consider mail forwarded at the customers request as indicating that our mail servers are sending spam. Why not? Did it come from your servers? On your network? If yes, then it's YOUR spam, and you should expect to be held fully accountable for it. If that's an unpleasant notion, and I'll stipulate that it sure is for me, then you need to do whatever you need to do in order to put a sock in it. We are long past the time when excuses for relaying/forwarding/bouncing spam were acceptable. The techniques for mitigating these -- at least to cut down a torrent to a trickle -- are well-known, well-understood, well-documented and readily available in a variety of implementations. More generally, the best place to stop spam is as near its source as possible. So if you're the forwarder, you're at least one hop closer to the source than the place you're forwarding to -- thus you should have a better chance than they do of stopping it. And you should at least make a credible try: nobody expects perfection (though we certainly hope for it) but doing _nothing_ isn't acceptable, either. So, for instance: take advantage of the AOL feedback loop. Anything that they're catching -- that you're not -- indicates an area where you can improve what you're doing. Find it, figure it out, and do it. Everyone benefits -- including all your users who aren't having their mail forwarded. ---Rsk
Choicepoint [was: Re: Break-In At SAIC Risks ID Theft]
It gets worse: Database giant gives access to fake firms http://www.msnbc.msn.com/id/6969799/ ---Rsk
Re: Verizon wins MCI
On Mon, Feb 14, 2005 at 11:38:10PM -0500, Jon Lewis wrote: But does anyone really know how big WorldCon is/was? chuckle Well, by one metric, they're #1:

Rank  ISP               Currently-listed spam issues
----  ----------------  ----------------------------
  1   mci.com           193
  2   kornet.net        164
  3   sbc.com           119
  4   comcast.net       100
  5   xo.com             78
  6   above.net          75
  7   crc.net.cn         68
  8   verizon.net        67
  9   level3.net         64
 10   interbusiness.it   56

(from the Spamhaus top ten list (http://www.spamhaus.org/statistics.lasso)) Combining entries 1 and 8 puts them even further out in front. ---Rsk
Re: Verizon wins MCI
On Tue, Feb 15, 2005 at 06:56:54PM +, Christopher L. Morrow wrote: we aim to please? or was there some hidden meaning to your email/troll? 1. I didn't realize that accurately reporting the facts was now considered a troll. Fascinating. 2. Nope, there's no hidden meaning -- unless you're someone with the authority, integrity and courage to pull the plug on those 193 spam operations. By close-of-business today would be just fine, thanks. Those of us absorbing the operational costs of dealing with the abuse they're cranking out would really appreciate it. 3. If that made you uncomfortable, you probably don't want to read this: Should MCI Be Profiting From Knowingly Hosting Spam Gangs? http://www.spamhaus.org/news.lasso?article=158 ---Rsk
Re: Time to check the rate limits on your mail servers
On Thu, Feb 03, 2005 at 11:42:55AM +, [EMAIL PROTECTED] wrote: CNET reports http://news.com.com/Zombie+trick+expected+to+send+spam+sky-high/2100-7349_3-5560664.html?tag=cd.top that botnets are now routing their mail traffic through the local ISP's mail servers rather than trying their own port 25 connections. There is one misstatement in this article, though: the author says: This means the junk mail appears to come from the ISP [...] If it's coming from their servers (or their network), it IS coming from the ISP, and they bear full responsibility for making it stop. ---Rsk
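As a concrete illustration of the title's advice, these are the per-client rate-limit knobs that Postfix provides via its anvil service (parameter names are from Postfix 2.2; the values below are illustrative assumptions, not recommendations):

```
# /etc/postfix/main.cf -- illustrative per-client limits (Postfix >= 2.2)
# Time window for the rate counters below:
anvil_rate_time_unit = 60s
# Max new connections per client per time unit:
smtpd_client_connection_rate_limit = 30
# Max messages per client per time unit:
smtpd_client_message_rate_limit = 100
# Max simultaneous connections per client:
smtpd_client_connection_count_limit = 10
```

Legitimate users rarely approach limits like these, while a zombie pumping spam through the ISP's smarthost hits them almost immediately.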
Re: Time to check the rate limits on your mail servers
On Thu, Feb 03, 2005 at 09:21:19PM +0200, Petri Helenius wrote: Nils Ketelsen wrote: Only thing that puzzles me is, why it took spammers so long to go in this direction. It didn't. It took the media long to notice. Pete's correct. And there's another reason: spammers have long since demonstrated that they will adapt when necessary. Now that some ISPs have FINALLY, more than two years after they were warned that they needed to block port 25 inbound/outbound ASAP on as much of their address space as possible in order to put a sock in this, done something...the spammers may have judged that it's become necessary. And please note: this is far, FAR from the last thing that they have in their bag of tricks. ---Rsk
Re: fixing insecure email infrastructure (was: Re: [eweek article] Window of anonym
On Thu, Jan 13, 2005 at 12:26:47PM +0100, Stephane Bortzmeyer wrote: 4) all domains with invalid whois data MUST be deactivated (not confiscated, just temporarily removed from the root dbs) immediately and their owners contacted. Because there is no data protection on many databases (such as .com registrars who are forced to sell the data if requested), people lie when registering, because it is the only tool they have to protect their privacy. Those people are fooling themselves. Much of the domain registration data is already being offered for sale (by spammers, of course) and no doubt, when it suits their purposes to do so, the same people will find a way to acquire the supposedly private data behind the rest. (How are they getting the data? I don't know. Could be weak registrar security, could be a backroom deal, could be a rogue employee. But there is demand for the data, and plenty of money to pay for it, therefore it *will* be acquired and sold.) The current pretense of privacy is nothing more than a convenient mechanism for registrars to pad their wallets and evade responsibility for facilitating abuse. ---Rsk
Re: verizon.net and other email grief
Reply (*long* reply) being sent off-list. If anyone else wants to see it, rattle my cage. ---Rsk
Re: no whois info ?
I'm going to try to keep this short, hence it's incomplete/choppy. Maybe we should take it to off-list mail with those interested. On Sat, Dec 11, 2004 at 10:06:10PM -0700, Janet Sullivan wrote: Great! So, if you are a vulnerable minority, don't use the internet. I said precisely the opposite. This _in no way_ prevents anyone from doing things anonymously on the Internet: it just means that they can't control an operational resource, because that way lies madness. And anyone who *is* a vulnerable minority should avoid doing this (that is, deliberately exposing themselves by controlling an operational resource) at all costs, because it self-identifies and instantly compromises the very privacy they seek/need/want. This doesn't stop anybody from doing anything they want online -- *except* controlling those resources, which, like I said earlier, is one of the very last things they should want to do if they're truly concerned about their privacy. And the other side of it is: I don't think an Internet with anonymous people controlling operational resources is workable. OK, how many anonymous domains (ala domainsbyproxy) have you been unable to contact? I *never* attempt to contact the owners of a domain which appears to be the source of abuse, anonymous or otherwise. It's a complete waste of time. I use the means at my disposal to ascertain whether it's really them (which, 99% of the time, is blindingly obvious) and then act accordingly. In the remaining 1% of the cases, where substantial doubt remains, I note it and await further developments. Sometimes those further developments include reports/claims of joe-jobs; sometimes they include clinching proof (either way) that eluded me; sometimes they're not forthcoming for a very long time. shrug So be it.
But I learned long ago that (modulo some very rare cases) the only thing that can come out of contacting said domain owners is possible disclosure of the means by which the abuse was detected, and the fact that it _has_ been detected, and that's not a good thing.

But, I get less spam, and MUCH less snail mail, with anonymous registrations.

Today, perhaps. Do you really think it's going to stay that way? Surely you must know that eventually the spammers WILL get their hands on your private domain registration data, WILL use it to spam -- and oh-by-the-way will also make a tidy profit doing a side business in selling it to anyone with cash-in-hand? C'mon, these are people with bags of money to spend. Do you *really* think that the underpaid clerk at J. Random Registrar is going to turn down $50K in tax-free income in exchange for a freshly-burned CD? And of course, once the data's in the wild, it's not like those who are selling it will balk at providing it to customers who have serious axes to grind. Or if you want to believe in the fiction of 100% trustworthy registrars, what happens when one of their [key] systems is zombie'd? Or when someone figures out how to hijack one of the data feeds and snarf all the brand-new domain data as soon as it's created? There is a market for this data. Therefore it will be acquired and sold. And attempts to maintain the pretense that it's otherwise -- while no doubt inflating the profits of those peddling anonymous registration -- are disingenuous, and in the long run, potentially very damaging, with the extent of the damage perhaps proportional to the degree to which people rely on it. (More bluntly: some people are going to be burned very badly by this. And the subsequent inevitable litigation won't undo it.)

I agree. But why should it matter if you know the name of the person controlling an operational resource if they are responsible net citizens? 
Maybe, but I think where we differ is that I strongly believe that responsibility (for operational resources) _requires_ public identification. [ Oh: please note: content is not an operational resource. I have no problem, f'instance, with someone running a blog anonymously. I have a serious problem with someone running a network anonymously. ] ---Rsk
Re: no whois info ?
I don't want to turn this into a domain policy discussion, but here are a few comments (in some semblance of order) which relate to the operational aspects.

1. Anyone controlling an operational resource (such as a domain) can't be anonymous. This _in no way_ prevents anyone from doing things anonymously on the Internet: it just means that they can't control an operational resource, because that way lies madness.

2. If someone wants to remain anonymous -- say, as in the example Janet cited, of sexual abuse victims -- then one of the very LAST things they should do is register a domain. Doing so creates a record (in the registrar's billing department if nowhere else) that clearly traces back to them. Further, an anonymously-registered domain isn't much good without services such as DNS and web hosting: and those, of course, represent still more potential information leaks. Anyone who thinks their anonymous registration is truly anonymous is in for a rude awakening: if the data isn't already in the wild, it will be as soon as the spammers find it useful to make it so. It's much better, if anonymity is the goal, not to begin by causing this data to exist.

3. Anonymous domain registration, like free email services, is an abuse magnet. [Almost] nobody offering either has yet demonstrated the ability to properly deal with the ensuing abuse: they've simply forced the costs of doing so onto the entire rest of the Internet. It's thus not surprising that a pretty good working hypothesis is to presume that any domain which either (a) has anonymous registration or (b) has contact addresses at freemail providers is owned by people intent on abusing the Internet. No, it's not always true, but as a first-cut approximation it works quite well. Doubly so if the domain is in a TLD known to be spammer-infested (e.g., .biz) and triply so if the domain name itself screams spam (e.g. cheap-phentermine-online.biz). [1]

4. 
Spammers have a myriad of ways of harvesting mail addresses that yield the same data but without requiring WHOIS output. For example, some of the malware they've released prowls through all the sent/received mail on infected systems...which means that if anyone using their brand-new anonymously-registered domain happens to send a single message to someone else -- who is already or subsequently infected -- then the address in question will shortly be in the wild, bought and sold and used by spammers. Note that some of the infected systems are mail servers, so even if the sender and recipient are secure from infection, the address in question may still be acquired. And no doubt some of them are inside registrars and DNS hosts and web hosts, just like they're [nearly] everywhere else. And this is just one way that addresses are harvested.

5. Spam is about far more than merely SMTP these days. SPIM (IM spam) and SPIT (VOIP spam) and adware and all kinds of other things are being used -- and by _the same people_, e.g. Spamford, to do exactly the same thing: put content in front of eyeballs. Even if we could throw a switch and cut off all SMTP spam, the respite would only be temporary. So just trying to hide from SMTP spam, although it might provide the comfortable illusion of accomplishing something in the short term, is useless in the long term.

6. Spam is a problem for everyone, and so it's everyone's responsibility to fight it. Those who want the privilege of controlling operational resources must also accept the responsibility of doing their part. 
---Rsk

[1] To save you the trouble of looking it up:

    Domain Name: CHEAP-PHENTERMINE-ONLINE.BIZ
    Domain ID: D3193600-BIZ
    Sponsoring Registrar: DOTSTER
    Domain Status: ok
    Registrant ID: DOTS-1025016423
    Registrant Name: N K
    Registrant Organization:
    Registrant Address1: -
    Registrant Address2: n/a
    Registrant City: -
    Registrant State/Province: -
    Registrant Postal Code: -
    Registrant Country: United States
    Registrant Country Code: US
    Registrant Phone Number: +1.311212
    Registrant Facsimile Number: +1.311212
    Registrant Email: [EMAIL PROTECTED]

and so on. A 200-foot-high billboard would only be slightly more obvious.
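As a rough illustration of the "first-cut approximation" heuristic in point 3 above, the way the signals stack (doubly so, triply so...) might be sketched like this. Everything here is invented for illustration -- the function, the token list, and the scoring are ours, not any real filter's:

```python
# Illustrative only: TLD and token lists invented for this sketch.
SUSPECT_TLDS = {"biz"}
SPAMMY_TOKENS = {"cheap", "online", "phentermine"}

def first_cut_suspicion(domain, anon_registration, freemail_contact):
    """Score the stacking signals: each one alone is weak, but
    together they make a pretty good working hypothesis."""
    score = 0
    if anon_registration:            # (a) anonymous/obfuscated registration
        score += 1
    if freemail_contact:             # (b) contact addresses at freemail providers
        score += 1
    name, _, tld = domain.lower().rpartition(".")
    if tld in SUSPECT_TLDS:          # TLD known to be spammer-infested
        score += 1
    if any(tok in name for tok in SPAMMY_TOKENS):  # name screams spam
        score += 1
    return score
```

Run against the footnote's example, an anonymously-registered cheap-phentermine-online.biz scores a maximum 4, while a plainly-registered example.org scores 0.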
Re: verizon.net and other email grief
On Fri, Dec 10, 2004 at 02:43:21PM +, Simon Waters wrote: The most obvious is none of the three UK ISPs I have ready access to can connect to port 25 on relay.verizon.net. (MX for all the verizon.net email addresses). We can ping it (I'm sure it isn't singular?), but we have no more luck delivering email than contacting Verizon technical staff; logs suggest we are in day 3 of this. I'm now listening to hold music at international rates - ouch.

I think I can shine a little bit of light on what might be your Verizon problem.

Summary: Verizon has put in place an exceedingly stupid anti-spam system which does not work, which facilitates DoS attacks, and which provides active assistance to spammers. Verizon has been told all of this, and it's been discussed on Spam-L. If there's been a response from Verizon, I haven't seen it: and AFAIK the practice continues. Anyone trying to deliver mail there might want to at least skim this to get an idea of the issues they may bump into. Please note that in places this is sketchy, because it seems impossible to get Verizon to provide the information necessary to make it otherwise (or correct any errors).

Details: When an incoming SMTP connection is made to one of Verizon's MX's, they allow it to proceed until the putative sender is specified, i.e. they wait for this part of the SMTP transaction:

    MAIL From:[EMAIL PROTECTED]

Then they pause the incoming connection. And then they start up an outbound SMTP connection from somewhere else on Verizon's network, back to one of the MX's for example.com. They then attempt to verify that blah is a valid, deliverable address there. Since most people have long since disabled SMTP VRFY, they actually construct a fake message and attempt delivery with RCPT. If delivery looks like it's going to succeed, they hang up this connection (which is rude), and un-pause the incoming one, and allow it to proceed. 
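The callout dialogue just described boils down to a short fixed command sequence. A minimal sketch of the probe a callback verifier constructs -- the helper name and parameters here are ours for illustration, not anything Verizon has published:

```python
def callout_commands(probe_helo, probe_sender, address_to_verify):
    """Build the SMTP command sequence a callback verifier would issue
    against an MX of the putative sender's domain (sketch only)."""
    return [
        f"HELO {probe_helo}",
        f"MAIL FROM:<{probe_sender}>",      # verifiers often use the null sender <>
        f"RCPT TO:<{address_to_verify}>",   # the actual deliverability test
        "QUIT",                             # a polite verifier would QUIT here;
                                            # per the post, Verizon just drops
                                            # the connection instead
    ]

# Probe triggered by an incoming "MAIL From:" claiming blah@example.com:
commands = callout_commands("verifier.example.net", "", "blah@example.com")
```

Note that the probe never reaches DATA: the verifier only cares whether RCPT is accepted, which is exactly why any forged-but-deliverable address anywhere defeats it.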
If delivery looks like it's going to fail, then they also hang up their outbound connection (still rude), un-pause the incoming one, and reject the traffic. This also means that if the MX they try to connect to is (a) busy (b) down (c) unaware of all the deliverable addresses (d) something else, that they'll refuse the incoming message. It also means that if the address that's trying to send mail to Verizon is something like [EMAIL PROTECTED], which is the address that the people at Thule Racks emit support traffic from, but which doesn't accept traffic, that Verizon will deny the message. (Yeah, this isn't very bright on Thule's part, either.) Whoops.

This is bad for a whole bunch of reasons: two of the more obvious ones are (a) it's a pathetic anti-spam measure because ANY forged address ANYWHERE will do, and (b) it doesn't scale. Add to that (c) it abuses RCPT because apparently Verizon is unwilling to use VRFY and to accept the decision of many mail server operators to disable it. Oh, and (d) the behavior of their probe systems is nearly indistinguishable from that of spam-spewing zombies, which don't obey the SMTP protocol either.

[ (b) is also how it lends itself to DoS attacks. Sure, Verizon could limit the rate at which they make outbound connections, but then attacker X could impose significant delay on mail from domain Y just by forging a boatload of messages purporting to be from addresses in Y to Verizon. If Verizon rate-limits their outbound connections, then any real messages from Y will be stuck in the verification queue along with a kazillion forgeries. And beyond that: other people are foolishly adopting this callback nonsense as well. Slashdot carried a note the other day about a program _designed_ to do this. This allows attacker X to forge messages from domain Y to idiots I1, I2... In, for a very large n, and then stand back as all of them simultaneously try to connect to the MX's for domain Y. 
General principle: any anti-spam measure that generates more junk SMTP traffic at a time when we're drowning in it is probably a bad idea. ]

One thing that's not clear is whether or not Verizon caches any of this information. Doing so might help cut down on DoS attack methods that involve them, but of course it doesn't do anything about those which leverage everyone else who's doing callbacks.

And this, unfortunately, is not the end of it. A lot of people, including me, are blocking particularly problematic spammer-controlled networks at (a) our border routers (b) our firewalls or (c) our mail servers. In other words, we not only won't accept mail from them, we won't even allow them to connect: we're blocking all IP traffic from them. This prevents them from spamming (at least directly from their own network space); it also prevents them from using their resources to build lists of deliverable addresses to sell to other spammers by poking
Re: [Fwd: zone transfers, a spammer's dream?]
On Thu, Dec 09, 2004 at 03:52:38AM +0200, Gadi Evron wrote: After a much too long introduction here comes my questions: is this deliberate? I can understand that Chad has bigger things to worry about than 24 domains getting on yet another spam list, but why Canada makes nearly half a million domains as easy to grab as this really is a mystery to me. It doesn't matter: that toothpaste came out of the tube a long time ago. Spammers have been buying and selling domain registration information for years, and anyone with cash-in-hand can buy as much of it as they want: either by TLD or by country or by category. Here's just a tiny tip-of-the-iceberg sample of the hundreds (?) of buyers, sellers, and brokers for WHOIS data and tools to manipulate it: http://www.bestextractor.com/ http://www.massmailsoftware.com/whois/ http://lists.freebsd.org/pipermail/freebsd-chat/2004-January/001942.html http://gnso.icann.org/mailing-lists/archives/dow1-2tf/msg00121.html http://www.sherpastore.com/store/page.cfm/2003 You can find as many more as you wish by using your favorite search engine to look for various combinations of extractor whois contact domain fresh leads market target email url and then just following the links back to their sites. (If the sites are down, don't worry: they'll be back soon, maybe with a new domain, maybe on a new web host.) How are they getting it? I don't know. Maybe they have deals with registrars; maybe they have deals with registrar employees; maybe they just breached registrar security. Or maybe something else entirely. However they're getting it, they're getting updates: in fact, updated information carries higher market value. And anyone who is so foolish as to believe that their private (obfuscated, cloaked, whatever) domain registration information is *really* private is in for a rude awakening. 
The irony of all this is that spammers already have all this information -- yet registrars have gone out of their way to make it as difficult as possible for everyone else to get it (rate-limiting queries and so on). ---Rsk
Re: [Fwd: zone transfers, a spammer's dream?]
On Thu, Dec 09, 2004 at 04:59:33PM +, Alex Bligh wrote: They clearly don't already have this information, or they wouldn't be a) offering to pay people for it b) continue to be trying to obtain it by data mining.

Sure, some of them quite clearly don't. And so they're buying it from those who do, or acquiring it themselves. But lots of them have it, and have means to acquire updates to it when it suits them. This can't be surprising to anybody, given the amount of money being thrown around, the technical sophistication that's been displayed, and the usual assortment of security issues.

Your argument [...]

It's not an argument. I'm just reporting the news. Well, okay, I suppose I'm also arguing that there's no point in maintaining the pretense that registrars are keeping it all tucked away safe from [automated] prying eyes, because it's obvious to everyone that *if* that was ever true, it stopped being true a long time ago. It's done. It's over. It's history. Any debate about how it _should_ have been kept tucked safely away has been rendered moot, and while it might still hold some philosophical interest, its practical value is nil. Note also that responsible registries do provide query access (automatable where necessary) to registration data in a variety of different ways; not all make it as hard as possible for others to access it. shrug

I think it's time to abandon the charade and simply publish all of it -- one static web page per domain, refreshed when the backing info changes. That would at least level the playing field, and pull the rug out from under those who are selling it. ---Rsk
Re: How many backbones here are filtering the makelovenotspam screensaver site?
The site has already been hacked/defaced, per full-disclosure. I can't personally verify or refute this because I can't reach it. ---Rsk
Re: How many backbones here are filtering the makelovenotspam screensaver site?
On Thu, Dec 02, 2004 at 04:18:52PM -0500, Hannigan, Martin wrote: Can you direct me toward a singular entity of 1MM bots controlled by a single master?

Nobody can, except the single master who's in control of same, and whoever that is -- if there is one -- is unlikely to voluntarily share that information publicly. That's part of the problem: we know that there are huge numbers of them. How huge? 10^7 was probably a good estimate early in 2004; 10^8 is starting to look plausible given reported discovery rates. And the quasi-related problem of spyware/adware is exacerbating it: it's not like that cruft is exactly fastidious about making sure that it doesn't open the door to things worse than itself.

We don't know how many there are. We probably can't know how many there are -- unless they do something to make themselves noticed, and surely those controlling them are smart enough to realize this and keep plenty in reserve. We can only know how many have made themselves visible, and even knowing that's hard. We don't know who's controlling them: are we up against 10 people or 10,000? We don't know everything they're doing with them. We don't know everything they're going to try to do with them. We don't know where they'll be next: they may move around (thanks to DHCP and similar), may show up in multiple places (thanks to VPNs), or they may *really* move around (laptops). We don't know how many are server systems as opposed to end-user systems. We don't know how to keep more from being created. We don't have a mechanism for un-zombie'ing the ones that already exist (other than laboriously going after them one at a time). We don't have a means to keep them from being re-zombied -- just as soon as the latest IE-bug-of-the-day hits Bugtraq. 
We don't have a viable way of controlling their actions other than disconnecting them entirely: sure, blocking outbound port 25 connections stops them from attempting spam delivery directly into mail servers, but surely nobody is so naive as to think those controlling these botnets are going to shrug their shoulders and give up when that happens? There are all kinds of other things they could be doing. *Are doing*. We don't have a clear understanding of how they're being controlled: are they quasi-autonomous? centrally directed? via a tree structure? do they phone home? are they operating p2p? all of the above? And so on. But we darn well should find out. ---Rsk
Re: Make love, not spam....
On Mon, Nov 29, 2004 at 02:14:01PM +, Fergie (Paul Ferguson) wrote: Techdirt has an article this morning that discusses how Lycos Europe is encouraging their users to run a screensaver that constantly pings servers suspected to be used by spammers and also suggests that

In other words, it's a distributed denial of service attack against spammers by Lycos. Already noted as unbelievably stupid and dissected on Spam-L, but: getting into a bandwidth contest with spammers is a guaranteed loss, as they have an [essentially] infinite amount available to them for free. Apparently Lycos is unaware of zombies (including those hosting web sites), HTTP redirectors, rapidly-updating DNS, throwaway domains, and other facts of life in the spam sewer. ---Rsk
Re: Make love, not spam....
On Mon, Nov 29, 2004 at 10:54:03AM -0600, Jerry Pasker wrote: The big difference between Lycos Europe, and a script kiddie with zombies is that Lycos is mature enough to use restraint and not knock down websites with brute force.

I have no idea whether they're mature enough. They're most certainly not knowledgeable enough, as they appear to have failed to account for:

- zombie'd end-user systems (some of which will no doubt download this DoS tool)
- web sites hosted on zombies (and serving requests sent to them either by rapidly-updating DNS or redirectors)
- throwaway domains
- hijacked ASNs

among other standard spammer tricks, all of which can be used to deflect the attack or redirect it against third parties. But beyond that: this is a silly tactic. Spammers have as much [free, to them] bandwidth as they want. They're trying to drown people who own the ocean. ---Rsk
Re: Big List of network owners?
On Thu, Oct 28, 2004 at 10:30:43AM -0700, Randy Bush wrote: I have been looking around, but haven't found it yet.. Is there a text list of who owns what netblock worldwide? ISP/Location/Contact. I am not looking for anything searchable, but rather, a large, up to date list that I can import to a database.. in general, we try not to make life that easy for spammers and scammers

Too late. Much, much too late. The spammers/scammers have long since gotten their hands on all of it. Whether because it was overtly sold to them, or covertly sold under-the-table by employees looking to pick up extra cash, or acquired via other means, they have it. Moreover, they're managing to get their hands on changes to it (as incidental experiments with recently-modified data indicate). Here's one example: $299 gets you a pocketful of CDROMs stuffed with data: http://www.promotionsite.net/ There are many more of these, of course, offering various compilations of data at various prices and in various formats.

At this point, no purpose is served by maintaining the pretense that this data is private, in any sense. It would be better for everyone to simply publish it in a simple format (e.g. one static web page per domain or network) so that everyone is on a level playing field.

(As to the comment about registrars locking up more and more data: evidence is growing that at least a couple of registrars ARE the spammers they're registering domains for. Makes sense: if you're going to burn through thousands of domains, you might as well sell them to yourself cheaply.) ---Rsk
Re: Spammers Skirt IP Authentication Attempts [operational content at end]
[ Two replies in one. Last point has operational content. ]

On Wed, Sep 08, 2004 at 01:52:59PM +0100, [EMAIL PROTECTED] wrote: I see that 56trf5.com is a real domain. Does this mean that the domain name registries and DNS are now being polluted with piles of garbage entries in the same way that Google searches have been polluted with tons of pages full of nothing but search keywords and ads?

Absolutely. As one example out of thousands, there are at least 350 domain names of the form:

    aaefelb.info abbbafd.info acdfiaj.info aclbkcdc.info adkehgi.info aeamdgi.info

that have been burned through by one currently-active group of spammers. Another group has about 16,700 domains (and counting) that I'm aware of. Note also the relationship between this proliferation, the zombies, and rapidly-updating DNS -- see below.

On Wed, Sep 08, 2004 at 01:26:27PM -0500, Robert Bonomi wrote: I _do_ think that it is _a_step_ 'in the right direction'. I'd *love* to see SPF-type data returned on rDNS queries -- that would practically put the zombie spam-sending machines out of business.

Not even close, I'm afraid. Yes, it would deal, to some extent, with direct-to-MX spam from them (*if* all the domains they were forging cooperated), but:

1. Nothing stops those zombies from sending out spam via the mail servers on the networks on which they're located. (And in the process, forging either the address of the former owner of the zombie or another user on the same network.) Before you say but the network operators would detect and fix that, let me point out that zombie-generated spam has been epidemic for going on two years and many -- MANY -- ISPs have yet to perform basic network triage that could mitigate much of this very quickly. It's reaching, I think, to expect that those same ISPs, who by now have grown quite comfortable sitting on their hands, would do anything about this. 
(I recently speculated on Spam-L that I was willing to bet that at least one such ISP would respond by plugging in more mail servers in order to alleviate the resulting congestion. Bruce Gingery promptly pointed out that this is a sucker bet: it's already happened.)

2A. Nothing stops those zombies from embedding spam payloads in ordinary messages sent by their [putative] users. Mail grandma? Spam grandma.

2B. Nothing stops those zombies from accepting spam payloads on a port and writing them directly to disk in the place and format expected by the end user's mail client. No SMTP. No DNS. And with optional forged headers proving SPF/DomainKeys/etc. validity, just in case tools for checking those are in use.

3. Spammers have been using rapidly-updating DNS for quite some time in order to spread out their zombie-hosted web sites. With today's change they can now extend that up a level: nothing is stopping them from, say, registering 1000 domains, using 100,000 zombies to host copies of the content, and using rapidly-updating DNS to distribute the traffic (as well as making shutting it all down tedious). And as if that won't be enough fun (and here's the operational bit):

4. This is the point that I think a lot of us tend to overlook: arguably, SMTP spam from those zombies is the *least* of our problems. Those systems are under the control of an unknown number of unknown persons, and can be put to many more uses -- and already have been. They've already been observed hosting spamvertised web sites [1], probing for open proxies, and participating in DDoS attacks. They represent an enormous computing resource that's effectively in the hands of The Bad Guys. (To put this in perspective, compare the estimated size of the zombie farm to the much-vaunted Google cluster in terms of CPU count, aggregate bandwidth, and network diversity.) 
And as I said previously, none of the three entities who could do anything about it (the zombies' former owners, consumer broadband ISPs, Microsoft) are willing to step up, admit there's a problem, and do whatever it takes to fix it. There is thus no reason at all to expect the problem to decrease; on the contrary, there is every reason (given the miserable track records of all concerned) to expect it to increase. ---Rsk [1] Including some with content of interest to the FTC, DEA, FBI, RIAA, MPAA, BSA, SPA and other people who have lawyers, guns and/or money. Makes sense from spammy's point of view: it's free, it's fault-tolerant and scalable (thanks to rapidly-updating DNS), and maybe someone else will get clobbered for it.
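The "rapidly-updating DNS" trick in point 3 (a pattern later widely known as fast flux) is simple to model: the spammer's nameserver answers each query with a short TTL and a different handful of zombie addresses, so blocking any one host accomplishes little. A toy sketch, with a hypothetical zombie pool drawn from documentation address ranges:

```python
import random

# Hypothetical pool of compromised hosts (RFC 5737 documentation ranges).
ZOMBIE_POOL = [f"192.0.2.{i}" for i in range(1, 101)] + \
              [f"198.51.100.{i}" for i in range(1, 101)]

def flux_answer(pool, n=3, ttl=300):
    """Toy fast-flux responder: every query gets a fresh random subset
    of zombie A records with a deliberately short TTL, so resolvers
    re-ask soon and the targets keep moving."""
    return {"ttl": ttl, "a_records": random.sample(pool, n)}

answer = flux_answer(ZOMBIE_POOL)
```

Two consecutive queries will usually see disjoint record sets, which is exactly what makes takedown (and blocklisting by IP) so tedious here.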
Re: Spammers Skirt IP Authentication Attempts
On Mon, Sep 06, 2004 at 07:19:01PM -0400, Mark Jeftovic wrote: I'm not sure the people behind this concept (SPF, RMX, et al) ever intended it to be the FUSSP, but a lot of the ensuing enthusiasm built it up to that.

Consider that the people behind SPF made this statement (upon introducing it): Spam as a technical problem is solved by SPF. If, therefore, there is an overabundance of enthusiasm for that concept, then it seems to be very clear where full responsibility for that rests.

I've *never* viewed SPF as an antispam methodology, but considered it an inevitable utility of the DNS system. Other methods are evolving to deal with spam, don't confuse them with what SPF is, which is essentially an authentication/identification framework that has the ability to mitigate one of the more popularly used spam obfuscation techniques.

I'll agree with you that it may mitigate one of the more popularly used spammer obfuscation techniques, but that particular technique is a minor problem (considering spam/abuse as a whole) and not all that worth solving -- since *other* spammer obfuscation techniques which SPF (and DomainKeys and SenderID et al.) don't address are already available and being used. (Why aren't they used more? Spammers haven't needed to. But if pressed, they will. Rapidly.) The bigger problem isn't the spammer obfuscation technique: it's the backscatter from all the mail systems which bounce instead of reject.

Bouncing was not all *that* unreasonable until we started to operate in an environment with massive SMTP forgery (from spam/viruses/etc.) -- several years ago. It's now much more desirable to reject whenever possible, saving everyone bandwidth/cycles/grief. I don't think I like the idea of wallpapering over this problem with SPF/DomainKeys/etc.: I think I'd rather see those mail systems fixed to deal with the environment they find themselves in. 
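For readers who haven't bumped into one yet: an SPF policy is just a TXT record published in the sending domain's zone. A hypothetical record for example.com, authorizing one netblock plus the domain's own MXs and hard-failing everything else, might look like:

```
example.com.  IN TXT  "v=spf1 ip4:192.0.2.0/24 mx -all"
```

Note that nothing stops a spammer from publishing an equally valid record for a throwaway domain -- which is exactly the limitation being argued here.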
[ Especially because the other spammer obfuscation techniques I referred to are available, and will be used if and when SPF or DomainKeys or any of these are widely deployed. Thus, mail systems will *still* inhabit an environment of massive forgery and should be prepared to deal with it as best they can...where I think one approach to that is don't make it any worse. ] Yeah, that may be a lot of work to complete -- although there are a myriad of simple techniques available to at least mitigate it, if not eliminate it entirely, and any relief would be welcome. That spammers are publishing SPF records is in no way indicative of an inherent flaw in SPF's objectives or a failure in its implementation, in fact, I welcome spammers who publish SPF data detailing the originating points of their email. If more known spam domains did this, a handy DNSBL could be constructed out of such data (with a few caveats of course, it would also potentially open the door to a type of DoS attack). RHSBLs (i.e. DNSBLs which list domain names instead of IP addresses, thus Right-Hand-Side BL's) have already been built. See www.surbl.org and www.ahbl.org, for example. But this tactic doesn't work -- as an anti-spam technique -- as well as we might hope, for three reasons: 1. Spammers have an [effectively] infinite supply of domains. This won't change because spammers who burn through domains rapidly (and thus need to purchase more) are some of the registrars' best customers. They're also early adopters of obfuscated registration, so much so that it's becoming increasingly likely that any domain thus registered is declaring intent to abuse. [1] 2. Spammers control a large -- as in tens of millions -- number of zombies. [2] This won't change because none of the three entities who could do anything about it (the zombies' former owners, consumer broadband ISPs, Microsoft) are willing to step up, admit there's a problem, and do whatever it takes to fix it. 3. 
Mail from a forged sender is operationally indistinguishable from mail from an unforged but unknown sender. [3] This won't change either. And because of #1, spammers have an essentially infinite number of domains to do it with, and because of #2, they have a large number of systems to do it from. And as a result, a *lot* of things that we could try, not just SPF/DomainKeys/et al., just won't work. (Example? Oh, take the various hashcash ideas that have been floated: getting into a computing-cycle contest with spammers is a guaranteed loss.) ---Rsk

[1] For example, one group of pirate-software spammers appears to be burning through domains at the rate of one every 24-48 hours, and has been doing this for months.

[2] It's hard to know how many systems are zombies, but tens of millions is probably the right order of magnitude. I did a back-of-the-envelope calculation a few months ago and came up with 10 to 20 million; Carl Hutzler (of AOL Policy Enforcement) provided an estimate of 50-100
Juno (United Online) contact phone?
Can someone please provide a phone number for the Juno (aka United Online) NOC? I already have the $1.95/minute support number (877-912-5866), so don't send that one. One of my clients' emails is being bounced, so I need to call them to find out why. Thanks! Rich
Re: Any way to P-T-P Distribute the RBL lists?
Drew Weaver [EMAIL PROTECTED] inquired: I know you all have probably already thought of this, but can anyone think of a feasible way to run a RBL list that does not have a single point of failure? Or any attackable entry?

Fedex. Never underestimate the bandwidth of a station wagon loaded with DLT cartridges barreling along the highway at 70mph...

Seriously, as has already been pointed out, the distribution side of the equation is the easy part. Server admins can use an out-of-band technique like ordinary dialup to get access to the blocklist. But generating the blocklist requires real-time reporting back to a central server. Even if the server is decentralized, it will still require a relatively small handful of accessible IP addresses. An out-of-band layer-2 network could be created for that at the peering points, so as to prevent outside attack. Probably worth doing among major ISPs. Wouldn't scale to end users, of course. But end users could still subscribe to the blocklist through periodic updates.

The other obvious thing that could be done would work pretty much like caller ID: create a set of SMTP enhancements that allow email recipients to accept mail from those who have provided traceable ID to the ISPs that participate, and who have agreed to acceptable-use policies that place strict limits on bulk email. Wait, hasn't that been done? The pre-1987 ARPAnet? Oh yeah, we've outgrown that...

Public humiliation might also work. Bring back the stockades so we can place spammers out front of courthouses everywhere. Too bad society's outgrown that too... -rich
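Whatever the distribution scheme, the lookup side of an RBL is just DNS, and it's worth seeing how small it is. A sketch of building the query names for the two common list flavors -- IP-based (DNSBL) and domain-based (RHSBL); the zone names used in the comments and example are hypothetical placeholders, not real lists:

```python
def dnsbl_query_name(ip, zone):
    """IP-based DNSBL lookup name: reverse the dotted quad, append the
    list's zone.  An A-record answer in 127.0.0.0/8 conventionally
    means 'listed'."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def rhsbl_query_name(domain, zone):
    """Domain-based (right-hand-side) BL lookup name: the normalized
    domain itself, plus the list's zone."""
    return domain.rstrip(".").lower() + "." + zone

# e.g. checking 192.0.2.1 against a hypothetical list at bl.example.net:
name = dnsbl_query_name("192.0.2.1", "bl.example.net")
```

Because the consumer side is ordinary DNS, mirroring or rsync'ing the zone out-of-band (as discussed above) changes nothing for the mail servers doing the lookups.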
Re: Verisign Responds
Leo Bicknell wrote: Looks like the lawsuits are going to be the ones to settle this dispute...anyone think there's a chance of ICANN pulling .COM and .NET from Verisign due to breach of contract? I think it's highly unlikely.

Dave Stewart wrote: Oh, I dunno... ICANN has no teeth, so that won't happen.

Shouldn't one of them smarty-pants attorneys file for an immediate injunction against VeriSign? Looks like plenty of technical arguments have been posted here which even the most dim-witted judge would understand vs. the public position taken by VeriSign that they should keep their cash register jingling with Sitefinder proceeds while the topic is studied and/or fought over for another 24 months.

It's obvious to me that the technical arguments fade out quickly if the service is kept up and running. VeriSign cashes in while everyone else incurs the expense of implementing workarounds and bug-fixes. Once the workarounds are in place for more than a couple of weeks, there isn't much impetus to put everything back the way it was. Has an injunction been requested? -rich
Re: What *are* they smoking?
VeriSign stands to gain financially; take a look at this excerpt from an AP news blurb published yesterday: Ben Turner, VeriSign's vice president for naming services, described the service as a way to improve overall usability of the Internet. People mistype .com and .net names some 20 million times daily, Turner said, and internal studies show the vast majority of users prefer a page like this to what they are getting today. ... Currently, Site Finder sends lost Web surfers to both regular search results and pay-for-placement listings, which are marked as such. Turner said VeriSign was partnering with two search companies he would not name. He would not disclose how much VeriSign would earn from those companies, with which it has revenue-sharing arrangements.

Has anyone found out any details of the contracts which VeriSign has apparently signed to profit from this little venture? -rich
Re: Nanog broken?
Hi all. I haven't seen any posts this morning; is the list broken, or did everyone take a day off? Enjoy the silence.