Re: Man, they'll try anything to hack your system...
Ben Scott wrote:
> On 1/28/06, Python <[EMAIL PROTECTED]> wrote:
> > A huge percentage of my spam comes from IP addresses with no reverse lookup.
>
> A huge percentage of my spam contains the character 'e' in the message body.
>
> Both the above statements illustrate a classic spam-fighting mistake. It's *absolutely useless* to say "A huge percentage of my spam meets such-and-such a criterion" unless you can *also* say "A huge percentage of my ham does *not* meet the same criterion". If you've got that, you've got something useful. Otherwise, forget it.
>
> Anyone here have figures on what percentage of their ham violates standards or best practices? I don't, but based on anecdotal evidence from operators much larger than me, the answer is: "A lot."
>
> The key is to distinguish spam from ham, not merely to assign characteristics to spam.

A cursory examination of two of my clients' mail logs over the last couple of days shows about 3/4 of incoming email was spam. About 4/5 of the spam had bad MX or PTR records. Of the 1/4 that was ham, 1/5 had bad MX or PTR records. Since one of the clients was a school, that might skew the data.

So, rejecting on bad senders' MX/PTR/etc. would reject 20% of the ham coming in there. Better than a few years back, when 60% of senders of ham were rejected, but not acceptable yet. I was hoping to find it better than that, which is why I asked the question in the first place (being too lazy to check it out myself :-). YMMV. I would hope business email to be better.

I guess I'll start using bad MX/PTR as a weight towards spam-hood, but not heavily weighted.

> I find bayesian filtering with a good user feedback loop is still the overall best solution.
>
> At Net Tech, for IMAP clients, I was working on a solution that used spamassassin and procmail to sort mail into folders, along with a few specially-named folders clients could move mail to identify it as "also-spam" or "not-spam". A cron job ran nightly to process those exceptions and train the filter. I left Net Tech before it went beyond the initial testing phase, but it looked promising.

That's what I do for most of our clients. It works pretty well. In conjunction with Mozilla junk training, it helps a lot.

-- 
Dan Jenkins ([EMAIL PROTECTED]) Rastech Inc., Bedford, NH, USA --- 1-603-206-9951
*** Technical Support Excellence for over a quarter century

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss
Re: Man, they'll try anything to hack your system...
Fred wrote:
> I've run into that problem in the past. Forget emailing to anyone on AOL from your own "private" MX.

FWIW, I've compiled a list I use. I run a few mail servers for clients on cable connections. Email sent to the domains below needs to be routed through the cable ISP's SMTP server. Whatever blacklist rcn.com or rr.com (I forget which) uses also includes DSL lines and T1s. So, a client with a T1 for ten years is blocked with a mildly snotty message. No reply to any contact attempts to resolve the matter, of course. Whenever I discover a new one, I add it to all the clients' lists.

(Yes, I could simply relay all their mail, but I know of one ISP which apparently discards all undeliverable emails without any notice to the sender of the failure. I prefer to be more in control.)

Here's my list from Postfix's transport table:

adelphia.com
adelphia.net
aerosat.com
amerprinting.com
aol.com
bellsouth.net
can.xerox.com
cthulhu.neutraldomain.org
earthlink.net
ed.state.nh.us
hp.com
juno.com
lightshipmail.net
mailer-useast.xerox.com
mailstore1.secureserver.net
mail.support.hp.com
mail.tfsd.sk.ca
monster.com
moria.seul.org
netscape.net
netzero.com
netzero.net
prodigy.net
rcn.com
registeredsite.com
rr.com
sbcglobal.net
schoolforge.net
server.totalnetnh.net
smtp.secureserver.net
state.nh.us
support.hp.com
supportwebsite.com
totalnetnh.net
veeco.com
verizon.net
williampikedesign.com
wmconnect.com
xerox.com
yahoo-inc.com

-- 
Dan Jenkins ([EMAIL PROTECTED]) Rastech Inc., Bedford, NH, USA --- 1-603-206-9951
*** Technical Support Excellence for over a quarter century
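[In a Postfix transport table, each of those domains maps to the ISP's smarthost. A minimal sketch of the shape of such entries; the relay hostname here is a placeholder, not from Dan's post:

```
# /etc/postfix/transport -- route problem domains via the ISP's relay
# (smtp.isp.example.com is a hypothetical smarthost)
aol.com         smtp:[smtp.isp.example.com]
verizon.net     smtp:[smtp.isp.example.com]

# /etc/postfix/main.cf would point at the map:
#   transport_maps = hash:/etc/postfix/transport
# then rebuild the lookup table with:  postmap /etc/postfix/transport
```

The square brackets around the relay host tell Postfix to skip the MX lookup and connect to that host directly.]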
Re: Man, they'll try anything to hack your system...
On Monday 30 January 2006 11:30, Ben Scott wrote:
> On 1/28/06, Python <[EMAIL PROTECTED]> wrote: ...
>
> Anyone here have figures on what percentage of their ham violates standards or best practices? I don't, but based on anecdotal evidence from operators much larger than me, the answer is: "A lot."
>
> The key is to distinguish spam from ham, not merely to assign characteristics to spam.
>
> > A good chunk of the remaining spam comes from roadrunner addresses, presumably rooted zombies.
>
> Blocking the mass-market consumer Internet feed ranges is reportedly a rather more effective spam/ham separator than looking for standards compliance. The vast majority of mail from such ranges is, in fact, spam. Of course, there are a few people running their own MX on such feeds who get rather annoyed by such actions, including people on this list. Sadly, those are so few that they are often considered "justifiable collateral damage".

I've run into that problem in the past. Forget emailing to anyone on AOL from your own "private" MX.

-Fred
Re: Man, they'll try anything to hack your system...
On 1/30/06, Python <[EMAIL PROTECTED]> wrote:
> > At Net Tech, for IMAP clients, I was working on a solution that used spamassassin and procmail to sort mail into folders, along with a few specially-named folders clients could move mail to identify it as "also-spam" or "not-spam". A cron job ran nightly to process those exceptions and train the filter.
>
> If this used "per user" rulesets on the server, then it also supports the fact that one person's spam may well be someone else's ham.

It did use per-user rule sets, and ran the training (a shell wrapper around "sa-learn", IIRC) on a per-user basis as well. The general idea was to give people the power of a Bayesian SpamAssassin without the need to know anything about Unix commands or the shell, or even a web UI. The only "UI" was the same mail program (IMAP client) the user had been using all along.

-- Ben
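[The wrapper itself isn't shown in the thread. As a rough sketch under assumed conventions -- Maildir subfolders named ".also-spam"/".not-spam" and a per-user Bayes database -- a nightly trainer might generate sa-learn invocations like this:

```shell
#!/bin/sh
# Hypothetical reconstruction of the nightly per-user trainer; the folder
# layout and database path are assumptions, not the original Net Tech code.
# Prints the sa-learn commands for one user rather than running them, so
# cron can pipe the output to `sh` (or an admin can review it first).
training_commands() {
    md="/home/$1/Maildir"
    db="$md/.spamassassin"
    echo "sa-learn --spam --dbpath $db $md/.also-spam/cur"
    echo "sa-learn --ham --dbpath $db $md/.not-spam/cur"
}

# A nightly cron entry might look like:
#   0 3 * * * for u in $(ls /home); do training_commands "$u" | sh; done
```

A real version would also clean out or re-file the processed messages so they aren't learned twice.]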
Re: Man, they'll try anything to hack your system...
On 1/30/06, Neil Joseph Schelly <[EMAIL PROTECTED]> wrote:
> ... the FQDN hostname you enter during install is properly put in the hosts file with your "real" IP as opposed to the localhost one.

Doesn't work for systems without a static connection (e.g., dynamic IP clients, laptops that roam, etc.).

> Personally, I think the right way to fix it is from the mail server perspective. I don't think they should pick the name they'll report themselves as based on a localhost address, but based on the first non-localhost address they bind to.

Doesn't work for systems with transient connections (e.g., dialup).

I'm not trying to argue that Red Hat's chosen solution isn't broken, just illustrating that I don't know of any solution that doesn't break something for somebody. Pick your poison.

-- Ben
Re: Man, they'll try anything to hack your system...
On Monday 30 January 2006 11:52 am, Ben Scott wrote:
> On 1/29/06, Bill McGonigle <[EMAIL PROTECTED]> wrote:
> > I consider postfix to be working properly in this case and the fedora core installer to be the misbehaver here.
>
> Yah, that's been a long-time standard Red Hat behavior. I've always corrected the /etc/hosts file soon after install.

Debian does the same thing in my experience. I've often thought that it is pretty appropriate, and that the FQDN hostname you enter during install is properly put in the hosts file with your "real" IP as opposed to the localhost one.

Personally, I think the right way to fix it is from the mail server perspective. I don't think they should pick the name they'll report themselves as based on a localhost address, but based on the first non-localhost address they bind to.

-Neil
Re: Man, they'll try anything to hack your system...
On Mon, 2006-01-30 at 11:30 -0500, Ben Scott wrote:
> On 1/28/06, Python <[EMAIL PROTECTED]> wrote:
> > A huge percentage of my spam comes from IP addresses with no reverse lookup.
>
> A huge percentage of my spam contains the character 'e' in the message body.
>
> Both the above statements illustrate a classic spam-fighting mistake. It's *absolutely useless* to say "A huge percentage of my spam meets such-and-such a criterion" unless you can *also* say "A huge percentage of my ham does *not* meet the same criterion". If you've got that, you've got something useful. Otherwise, forget it.

Well, since Jan 29 4 AM, the only email with no PTR for the source that got approved by spamassassin was flagged correctly as SPAM by spambayes. That's from 277 smtp connections (out of 1291) with no PTR. Since my filtering is working OK, I have no real incentive to dig through all of my logs to find the exact false positive percentage. I can tell you it is small.

Years ago mascomabank.com had no PTR record for their mail server, and that was enough to keep me from using a general ban on mail servers with no PTR. Now that the filters are smarter, I have no need for a blanket ban. Still, for an overloaded mail server that needs a simple rule to reduce the load, this could be useful.

> Anyone here have figures on what percentage of their ham violates standards or best practices? I don't, but based on anecdotal evidence from operators much larger than me, the answer is: "A lot."
>
> The key is to distinguish spam from ham, not merely to assign characteristics to spam.
>
> > A good chunk of the remaining spam comes from roadrunner addresses, presumably rooted zombies.
>
> Blocking the mass-market consumer Internet feed ranges is reportedly a rather more effective spam/ham separator than looking for standards compliance. The vast majority of mail from such ranges is, in fact, spam. Of course, there are a few people running their own MX on such feeds who get rather annoyed by such actions, including people on this list. Sadly, those are so few that they are often considered "justifiable collateral damage".
>
> > spambayes provides an effective client spam filter (spambayes.org). The Outlook plugin is easy for Windows/Outlook folks. For everyone else, you'd probably run it as an imap/pop proxy.
>
> I find bayesian filtering with a good user feedback loop is still the overall best solution.
>
> At Net Tech, for IMAP clients, I was working on a solution that used spamassassin and procmail to sort mail into folders, along with a few specially-named folders clients could move mail to identify it as "also-spam" or "not-spam". A cron job ran nightly to process those exceptions and train the filter. I left Net Tech before it went beyond the initial testing phase, but it looked promising.

If this used "per user" rulesets on the server, then it also supports the fact that one person's spam may well be someone else's ham.

-- 
Lloyd Kvam
Venix Corp
Re: Man, they'll try anything to hack your system...
On 1/29/06, Bill McGonigle <[EMAIL PROTECTED]> wrote:
> I consider postfix to be working properly in this case and the fedora core installer to be the misbehaver here.

Yah, that's been a long-time standard Red Hat behavior. I've always corrected the /etc/hosts file soon after install.

Red Hat is trying to compensate for the fact that, at install time, people often don't know enough about their system to give it a proper network config. Indeed, for some systems (e.g., roaming laptops), it's impossible to declare one config "proper". At the same time, if they don't associate the host name with an IP address and an FQDN, various common *nix programs puke. I don't know if there is a "right" way to handle this problem for all cases. I suspect Red Hat just picked the breakage they found least bad.

-- Ben
Re: Man, they'll try anything to hack your system...
On 1/28/06, Python <[EMAIL PROTECTED]> wrote:
> A huge percentage of my spam comes from IP addresses with no reverse lookup.

A huge percentage of my spam contains the character 'e' in the message body.

Both the above statements illustrate a classic spam-fighting mistake. It's *absolutely useless* to say "A huge percentage of my spam meets such-and-such a criterion" unless you can *also* say "A huge percentage of my ham does *not* meet the same criterion". If you've got that, you've got something useful. Otherwise, forget it.

Anyone here have figures on what percentage of their ham violates standards or best practices? I don't, but based on anecdotal evidence from operators much larger than me, the answer is: "A lot."

The key is to distinguish spam from ham, not merely to assign characteristics to spam.

> A good chunk of the remaining spam comes from roadrunner addresses, presumably rooted zombies.

Blocking the mass-market consumer Internet feed ranges is reportedly a rather more effective spam/ham separator than looking for standards compliance. The vast majority of mail from such ranges is, in fact, spam. Of course, there are a few people running their own MX on such feeds who get rather annoyed by such actions, including people on this list. Sadly, those are so few that they are often considered "justifiable collateral damage".

> spambayes provides an effective client spam filter (spambayes.org). The Outlook plugin is easy for Windows/Outlook folks. For everyone else, you'd probably run it as an imap/pop proxy.

I find bayesian filtering with a good user feedback loop is still the overall best solution.

At Net Tech, for IMAP clients, I was working on a solution that used spamassassin and procmail to sort mail into folders, along with a few specially-named folders clients could move mail to identify it as "also-spam" or "not-spam". A cron job ran nightly to process those exceptions and train the filter. I left Net Tech before it went beyond the initial testing phase, but it looked promising.

-- Ben
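[A folder-sorting setup like the one Ben describes is commonly done with a procmail recipe; a minimal sketch, where the ".Spam/" Maildir folder name is an assumption rather than the original configuration:

```
# ~/.procmailrc -- pipe each message through SpamAssassin for tagging,
# then file anything it marked as spam into a Maildir spam folder
:0fw
| spamassassin

:0
* ^X-Spam-Status: Yes
.Spam/
```

The trailing slash on ".Spam/" tells procmail to deliver in Maildir format, which is what an IMAP client would see as a folder.]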
Re: Man, they'll try anything to hack your system...
On Jan 28, 2006, at 15:24, Dan Jenkins wrote:
> What experience have folk had tightening restrictions, like lack of MX or A or mis-matched MX/PTR pair? Does it still cause a lot of false rejections?

I turned on the postfix restriction:

  smtpd_sender_restrictions = reject_unknown_sender_domain

a couple days ago and found some servers I take care of had a hosts line like:

  127.0.0.1  kermit localhost localhost.localdomain

and so mails would go out claiming to be from <[EMAIL PROTECTED]>. I fixed the hosts line and now mails come through. I consider postfix to be working properly in this case and the fedora core installer to be the misbehaver here.

-Bill

-
Bill McGonigle, Owner           Work: 603.448.4440
BFC Computing, LLC              Home: 603.448.1668
[EMAIL PROTECTED]               Cell: 603.252.2606
http://www.bfccomputing.com/    Page: 603.442.1833
Blog: http://blog.bfccomputing.com/
VCard: http://bfccomputing.com/vcard/bill.vcf
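[The hosts-file fix Bill describes amounts to giving the machine's FQDN a real address instead of loopback. A sketch, where the hostname, domain, and address are illustrative examples rather than values from the post:

```
# /etc/hosts -- broken: the machine's own name resolves to loopback
#   127.0.0.1   kermit localhost localhost.localdomain

# fixed: loopback keeps only localhost; the FQDN gets the real address
127.0.0.1     localhost localhost.localdomain
192.0.2.10    kermit.example.com kermit
```

With that in place, the MTA reports a resolvable FQDN in its envelope instead of "kermit", and reject_unknown_sender_domain on the far end no longer fires.]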
Re: Man, they'll try anything to hack your system...
On Sat, 2006-01-28 at 15:24 -0500, Dan Jenkins wrote:
> What experience have folk had tightening restrictions, like lack of MX or A or mis-matched MX/PTR pair? Does it still cause a lot of false rejections?

A huge percentage of my spam comes from IP addresses with no reverse lookup. A good chunk of the remaining spam comes from roadrunner addresses, presumably rooted zombies. In the past I got into trouble rejecting senders without a PTR record, so even though it would clearly be effective, I fear it would still create some false positives.

postfix/amavisd/clam are doing a good job of filtering spam, phishing and viruses, though I have a small number of email users.

spambayes provides an effective client spam filter (spambayes.org). The Outlook plugin is easy for Windows/Outlook folks. For everyone else, you'd probably run it as an imap/pop proxy. (I run it as a pop proxy.) Getting it working for those people may be more of a tech support burden than you want to bear.

-- 
Lloyd Kvam
Venix Corp
Re: Man, they'll try anything to hack your system...
Bill McGonigle wrote:
> Now that you mention it, I've seen a few in the past few days with no MX records for the sending domain, even with a PTR record for the sending host. Not the same, but similarly strange and recent. I'd be happy to add a SpamAssassin or postfix rule to ignore mail from senders with no reachable MX for a reply. All that said, somebody might have just messed up their BIND views.

A few years ago a client asked me to block more junk mail. I restricted based on lack of an MX record. Eight out of fourteen board members could no longer send email to the organization. (Three of them were technology companies, no less.) I haven't tightened things down since then. ;-) I keep wanting to, but my clients are concerned about not getting legitimate emails, even though they also want to reduce junk.

What experience have folk had tightening restrictions, like lack of MX or A or mis-matched MX/PTR pair? Does it still cause a lot of false rejections?

-- 
Dan Jenkins ([EMAIL PROTECTED]) Rastech Inc., Bedford, NH, USA --- 1-603-206-9951
*** Technical Support Excellence for over a quarter century
Re: Man, they'll try anything to hack your system...
On Jan 27, 2006, at 14:03, Neil Schelly wrote:
> Technically, the SMTP spec says that a domain's blank address counts as the last MX record to try. So if gnhlug.org didn't have any MX records, then gnhlug.org itself should be tried. It may not be pretty, but according to the RFC, it's perfectly valid not to have an MX record for a domain.

Ah, yes, quite right. These few were very strange. They were spams that got through, which I usually look at to see how I can improve the ruleset. If I recall correctly, the mails came from:

  a host in example.com
  the host's IP had a PTR in example.com
  the From: field was from [EMAIL PROTECTED]
  example2.com had a whois record, NS records, but no A or MX (the NS records were outside example2.com)

So, I thought, "well, what good is a mail that can't be replied to?" Of course, it was advertising a website in example3.com for pills to do something to your body, so they weren't expecting any replies. Still, it's better than a Joe Job, and more easily disqualified by an MTA. I, of course, didn't have the right postfix rule in at the time.

-Bill
Re: Man, they'll try anything to hack your system...
On Friday 27 January 2006 01:30 pm, Bill McGonigle wrote:
> On Jan 27, 2006, at 13:13, Ben Scott wrote:
> > Anyone else seen this? Is it just net.stupidity on the part of some mail server operators somewhere, or are spammers/attackers trying something new?
>
> Now that you mention it, I've seen a few in the past few days with no MX records for the sending domain, even with a PTR record for the sending host. Not the same, but similarly strange and recent.

Technically, the SMTP spec says that a domain's blank address counts as the last MX record to try. So if gnhlug.org didn't have any MX records, then gnhlug.org itself should be tried. It may not be pretty, but according to the RFC, it's perfectly valid not to have an MX record for a domain.

-N
Re: Man, they'll try anything to hack your system...
On Friday 27 January 2006 01:13 pm, Ben Scott wrote:
> Anyone else seen this? Is it just net.stupidity on the part of some mail server operators somewhere, or are spammers/attackers trying something new?

I can imagine a scenario where this may be helpful to people. I can't imagine a way to misuse that sort of entry, but imagine that a company has a mail server on an internal IP address that receives incoming traffic from the outside world through NAT. So that external address gets NAT'd down to the internal address. Any servers on that internal network that try to send email to their domain will look up the external IP and try to connect. Because of the NAT, that may be difficult to route properly. Even if they can get the NAT to translate the stream to the mail server, the mail server will likely just reply directly to the internal address of the client server, because that's the source of the incoming connection post-NAT. This will cause connections to fail and hang and all that stuff.

If, however, they have an MX record for both the internal and external IP addresses and don't set up anything to allow routing from inside to the public IPs, then those machines that might try to connect to it will fail to connect to the first MX record (the public IP) and fall back to the secondary MX record (internal). It's a hack, but if you don't have good DNS views set up, or have difficult routing with NAT without the ability to do two-way NAT, then it should work.

-N
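[The arrangement Neil describes could be expressed in a zone file roughly like this; all names and addresses are made-up illustrations, not from any real domain in the thread:

```
; hypothetical zone fragment: outside hosts use the public MX; inside
; hosts fail to reach it and fall back to the RFC-1918 one
example.com.      IN  MX  10  mail-ext.example.com.
example.com.      IN  MX  20  mail-int.example.com.
mail-ext          IN  A   203.0.113.25    ; public address, NATed to the box
mail-int          IN  A   192.168.1.25    ; internal address of the same box
```

Lower MX preference is tried first, so external senders connect to the public address and internal senders end up on the private one only after the public connection fails.]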
Re: Man, they'll try anything to hack your system...
On Jan 27, 2006, at 13:13, Ben Scott wrote:
> Anyone else seen this? Is it just net.stupidity on the part of some mail server operators somewhere, or are spammers/attackers trying something new?

Now that you mention it, I've seen a few in the past few days with no MX records for the sending domain, even with a PTR record for the sending host. Not the same, but similarly strange and recent. I'd be happy to add a SpamAssassin or postfix rule to ignore mail from senders with no reachable MX for a reply.

All that said, somebody might have just messed up their BIND views.

-Bill
Re: Man, they'll try anything to hack your system...
On Jan 27, 2006, at 13:30, Bill McGonigle wrote:
> I'd be happy to add a SpamAssassin or postfix rule to ignore mail from senders with no reachable MX for a reply.

Found this for postfix:

  smtpd_sender_restrictions = reject_unknown_sender_domain

    Reject the request when the sender mail address has no DNS A or MX record. The unknown_address_reject_code parameter specifies the response code for rejected requests (default: 450). The response is always 450 in case of a temporary DNS error.

I bet the code for this directive could be adapted pretty easily to check for the three private ranges - call it reject_private_sender_mx or some such. I'd give it a shot, but I'm not on a current postfix quite yet.

-Bill
Re: Man, they'll try anything to hack your system...
On 1/27/06, Bill McGonigle <[EMAIL PROTECTED]> wrote:
> Now that you mention it, I've seen a few in the past few days with no MX records for the sending domain, even with a PTR record for the sending host. Not the same, but similarly strange and recent.

Well, the RFCs say that if there is no MX record for a domain, but there is an A record, treat the A record as if one had specified it as an MX host. A lot of people aren't aware of that when they configure their www.foo.com domains and see mail attempts coming to their web server.

> All that said, somebody might have just messed up their BIND views.

Or just be dumb. I can see people adding their private address space servers and wondering why they don't get any mail. :-)

-- Ben
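[The implicit-MX fallback Ben cites can be sketched as a small shell helper that picks which hosts an MTA would try; the `dig` usage in the trailing comment assumes live DNS and is my addition, not from the thread:

```shell
#!/bin/sh
# Pick the hosts to try for mail delivery: MX targets sorted by
# preference if any exist, otherwise the domain itself (implicit MX).
# $1 = newline-separated "pref host" MX lines (may be empty), $2 = domain
mail_hosts() {
    if [ -n "$1" ]; then
        printf '%s\n' "$1" | sort -n | awk '{print $2}'
    else
        printf '%s\n' "$2"
    fi
}

# With live DNS you might feed it:
#   mail_hosts "$(dig +short MX foo.com)" foo.com
```

This is why a domain with only an A record for www-style hosting still receives delivery attempts on that address.]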
Re: Man, they'll try anything to hack your system...
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Ben Scott wrote:
| In the vein of "Strange things seen on the Internet", I'm noticing a few domains have MXes pointing to hosts with addresses in RFC-1918 private IP address space. I noticed this because our mail server was trying to send DSN bounce messages to the domains, and so was trying to connect to some hosts with bogon IP addresses. Our firewall caught it and dropped it, and since it was from our server, it was highlighted in a log report.
|
| Anyone else seen this? Is it just net.stupidity on the part of some mail server operators somewhere, or are spammers/attackers trying something new?

I've seen that for several years. It appears to be a technique used by spammers/crackers. I suspect it is coupled with another attack/scoping vector, but I haven't delved very deeply.

- --Bruce

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.1 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFD2mjX/TBScWXa5IgRArGuAJ9eIETIweC+IhwS32j+nDuOt8RO7gCdGzVM
OOF+mFDHKtL0lykvOvnQnhM=
=lcqK
-----END PGP SIGNATURE-----
Re: Man, they'll try anything to hack your system...
On Fri, Jan 27, 2006 at 01:13:46PM -0500, Ben Scott wrote:
> In the vein of "Strange things seen on the Internet", I'm noticing a few domains have MXes pointing to hosts with addresses in RFC-1918 private IP address space. I noticed this because our mail server was trying to send DSN bounce messages to the domains, and so was trying to connect to some hosts with bogon IP addresses. Our firewall caught it and dropped it, and since it was from our server, it was highlighted in a log report.

Perhaps the domains use mail only internally? So I could set up mail for crschmidt.net to point to a local mail host, and only people at 'home' could deliver to that address usefully?

-- 
Christopher Schmidt
Web Developer
Re: Man, they'll try anything to hack your system...
In the vein of "Strange things seen on the Internet", I'm noticing a few domains have MXes pointing to hosts with addresses in RFC-1918 private IP address space. I noticed this because our mail server was trying to send DSN bounce messages to the domains, and so was trying to connect to some hosts with bogon IP addresses. Our firewall caught it and dropped it, and since it was from our server, it was highlighted in a log report.

Anyone else seen this? Is it just net.stupidity on the part of some mail server operators somewhere, or are spammers/attackers trying something new?

-- Ben
Re: Disallowing bots (was Re: Man, they'll try anything to hack your system...)
On Jan 27, 2006, at 10:41, Larry Cook wrote:
> How do you keep out the bad ones, the ones that ignore robots.txt?

The bad ones usually _read_ robots.txt to figure out where the "juicy stuff" is. So you can do:

  Disallow: /robottrap.html

And then have something tail your access log and instantly iptables anything that accesses /robottrap.html.

-Bill
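[Bill doesn't give an implementation, but a sketch of that trap might look like this; the log path, trap URL, and firewall command are assumptions:

```shell
#!/bin/sh
# Extract client IPs that fetched the trap URL from common/combined
# Apache log lines ($7 is the request path under default awk splitting).
trap_ips() {
    awk '$7 == "/robottrap.html" { print $1 }'
}

# In production, something like (requires root; paths are assumptions):
#   tail -F /var/log/apache2/access.log | trap_ips | while read ip; do
#       iptables -I INPUT -s "$ip" -j DROP
#   done
```

Since no link on the site points at /robottrap.html, the only way a client finds it is by reading robots.txt and deliberately crawling a disallowed path.]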
Re: Man, they'll try anything to hack your system...
On 1/27/06, Fred <[EMAIL PROTECTED]> wrote:
> On Thursday 26 January 2006 14:49, Thomas Charron wrote:
> > On 1/25/06, Paul Lussier <[EMAIL PROTECTED]> wrote:
> > > Oy. I almost never look at my apache logs. I probably should, but I don't. Tonight I was perusing them and noticing the activity in the access.log and was amazed at the things these people try:
> >
> > I enjoy poking at any sort of logs for something connected to the net nowadays. The sheer amount of SSH attempts per day boggles the mind.
>
> Yep. Which is largely why I moved my ssh off of port 22. Ssh attacks went to zero after that. There's a V.1 vulnerability that was exploited once, so I now make sure V.1 ssh is disabled.

Personally, I'm just leaving it there. If the machine happens to get compromised, I have VMWare taking a snapshot each day, and I store a few days' worth of snapshots, and once a week keep a snapshot that I'll keep for a month. If/when it gets compromised, I can just revert to a previous snapshot. Since the nature of the box is development, it should be ok.

> I've gotten comments from some others that watching the logs in realtime is very "Matrix-like", though I have yet to see the blonds, brunettes, and red-heads in them! ;-)

Hehehe. Well, sometimes, you can see where they're coming from, and I do tend to look at, say, french IPs wearing a little hat, etc. ;-)

Thomas
Disallowing bots (was Re: Man, they'll try anything to hack your system...)
Fred wrote:
> I've been debating if I should disallow all the other bots since they do put quite a load on my servers.

My understanding is that you do this with robots.txt, which the bots and spiders read. So it's basically an honor system that keeps out the good ones. How do you keep out the bad ones, the ones that ignore robots.txt?

Larry
Re: Man, they'll try anything to hack your system...
On Thursday 26 January 2006 14:49, Thomas Charron wrote:
> On 1/25/06, Paul Lussier <[EMAIL PROTECTED]> wrote:
> > Oy. I almost never look at my apache logs. I probably should, but I don't. Tonight I was perusing them and noticing the activity in the access.log and was amazed at the things these people try:
>
> I enjoy poking at any sort of logs for something connected to the net nowadays. The sheer amount of SSH attempts per day boggles the mind.
>
> A week or so ago I setup a new box on a VMWare instance, and just forwarded port 22.
>
> *wham* Billions of login attempts from all over the world..

Yep. Which is largely why I moved my ssh off of port 22. Ssh attacks went to zero after that. There's a V.1 vulnerability that was exploited once, so I now make sure V.1 ssh is disabled.

As far as apache logs, for my major websites, I do keep a "ssh [EMAIL PROTECTED] tail -f logfile" running for both access and error logs. The error logs are highly amusing. Constant queries for non-existent pages and directories for some of the most popular web-based software. It's nice, though, seeing the queries happen in realtime, as I learn a lot that way.

Bot activity represents 90+% of the traffic, and there are all kinds of bots that I had never seen before, along with the usual Slurps, GoogleBots, and MSNBots that are my friends. I've been debating if I should disallow all the other bots since they do put quite a load on my servers.

I've gotten comments from some others that watching the logs in realtime is very "Matrix-like", though I have yet to see the blonds, brunettes, and red-heads in them! ;-)

-Fred
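[Moving sshd off port 22 and disabling protocol 1, as Fred describes, is a two-line server-side change; the port number here is an arbitrary example, not his actual choice:

```
# /etc/ssh/sshd_config
Port 2222       # example alternate port
Protocol 2      # refuse the old, vulnerable SSH protocol version 1
```

Restart sshd after editing, and remember clients then need `ssh -p 2222 host` (or a matching Port line in ~/.ssh/config).]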
Re: Man, they'll try anything to hack your system...
On 1/25/06, Paul Lussier <[EMAIL PROTECTED]> wrote:
> Oy.
> I almost never look at my apache logs. I probably should, but I
> don't. Tonight I was perusing them and noticing the activity in the
> access.log and was amazed at the things these people try:

I enjoy poking at any sort of logs for something connected to the net
nowadays. The sheer amount of SSH attempts per day boggles the mind.

A week or so ago I set up a new box on a VMWare instance, and just
forwarded port 22.

*wham* Billions of login attempts from all over the world.

Thomas
Re: Man, they'll try anything to hack your system...
On 1/25/06, Paul Lussier <[EMAIL PROTECTED]> wrote:
> I almost never look at my apache logs. I probably should, but I
> don't.

You're supposed to look at the logs?

> Tonight I was perusing them and noticing the activity in the
> access.log and was amazed at the things these people try:

Yah, these days, the Internet is pretty much under constant attack.
The firewall at work is being probed constantly on all manner of ports
for all manner of services: SMTP, SSH (complete with account/password
guessing), HTTP, SMB, MS RPC, MS SQL, MySQL. They sweep the entire
range, too, so our block of several IPs often gets probed all at once,
with the same probe on each IP.

When I was running a webserver, the logs were full of attempted
exploits. Usually blind ones -- e.g., we saw tons of IIS probes on our
Apache/Linux webservers. When I was doing turn-ups of new systems all
the time, I usually saw the first probes within minutes. Apparently,
these days, a lot of spammers use active attacks in an effort to find
new zombies to relay their spam for them.

> The thing I find most amusing is that according to these logs, the
> majority of attempts are from systems running ancient versions of IE
> on NT 5.1.

FWIW and FYI: MSIE 6.0 (plus various patches and updates) is the
current release. NT 5.1 is Windows XP.

-- Ben
Re: Man, they'll try anything to hack your system...
On Wed, Jan 25, 2006 at 07:39:16PM -0500, Paul Lussier wrote:
> Oy.
>
> I almost never look at my apache logs. I probably should, but I
> don't. Tonight I was perusing them and noticing the activity in the
> access.log and was amazed at the things these people try:
>
> 84.58.131.234 - - "POST /drupal/xmlrpc.php HTTP/1.1" 404 364 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;)"
> 84.58.131.234 - - "POST /phpgroupware/xmlrpc.php HTTP/1.1" 404 370 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;)"
> 84.58.131.234 - - "POST /wordpress/xmlrpc.php HTTP/1.1" 404 367 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;)"
> 84.58.131.234 - - "POST /xmlrpc.php HTTP/1.1" 404 357 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;)"
> 84.58.131.234 - - "POST /xmlrpc/xmlrpc.php HTTP/1.1" 404 364 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;)"
> 84.58.131.234 - - "POST /xmlsrv/xmlrpc.php HTTP/1.1" 404 364 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;)"
> 24.60.72.162 - - "GET / HTTP/1.0" 302 370 "-" "-"
> 82.96.96.3 - - "POST http://82.96.96.3:802/ HTTP/1.0" 302 369 "-" "-"
> 82.96.96.3 - - "CONNECT 82.96.96.3:802 HTTP/1.0" 302 369 "-" "-"
> 211.74.10.80 - - "CONNECT smtp.rol.ru:25 HTTP/1.0" 302 369 "-" "-"
>
> So, from these, I conclude I should probably not be running drupal
> (whatever that is), wordpress, or anything with xmlrpc.php.

The vulnerable version of the XML-RPC library was patched long ago: in
the WordPress 1.2/early 1.5 era, which is probably more than a year ago
now. Drupal corrected it in the same timeframe. All these apps do/did
use the exact same XML-RPC library, but the patch was out long before
the 'sploits were in force.

The bug, for the record, was eval()ing stuff received over XML-RPC.
How someone didn't catch that as a security hole in the *first* three
years of the XML-RPC lib, I'll never know.
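The class of bug described here -- eval()ing data received over XML-RPC -- is easy to illustrate. This is a hypothetical Python sketch of the pattern, not the actual library's (PHP) code:

```python
import ast

def parse_value_unsafe(text):
    """The vulnerable pattern: eval() on remote input executes
    arbitrary expressions, not just literal values."""
    return eval(text)

def parse_value_safe(text):
    """Safer: ast.literal_eval() only accepts Python literals and
    raises ValueError on anything containing a function call."""
    return ast.literal_eval(text)

# eval() happily runs code smuggled in as a "value":
print(parse_value_unsafe("len('abcd')"))   # 4 -- arbitrary code ran

# literal_eval() rejects the same payload:
try:
    parse_value_safe("len('abcd')")
except ValueError:
    print("rejected")
```

The real exploit payloads were of course nastier than len(), but the shape is the same: whatever string arrives over the wire gets handed to the language's evaluator.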
> The thing I find most amusing is that according to these logs, the
> majority of attempts are from systems running ancient versions of IE
> on NT 5.1. *IF* that is to be believed, then what I should *really*
> be doing is mapping those URLs in apache to something which will
> provide them a virus to download and install :)

I highly doubt that's the case. There's absolutely no reason to
believe that these are actual browsers at all. Additionally, the
placement of the ; after 5.1 is not typical in MSIE browser strings:
I'm pretty sure that's an indicator of a bad UA set by a robot.

Isn't NT 5.1 some kind of version designation for what XP actually is?
Or 2000... or something like that. Dunno; that's beyond my knowledge.
But I wouldn't expect that these people are actually running browsers
at the other end. (This would be more obvious if the timestamps were
available from the logs: oftentimes you'll see a dozen of these
'sploits in a couple of seconds, which is obviously an indicator of a
non-human at the other end.)

> I'm tempted to try it :)

First step would be to just write something that checks JavaScript DOM
capabilities and fires off an XMLHttpRequest with the requesting IP if
it finds any. That way you could save yourself the trouble of
finding/writing a decent virus if it never sets off any bells.

-- Christopher Schmidt
   Web Developer
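Christopher's observation about the misplaced semicolon can be turned into a cheap log filter. A Python sketch, assuming the quirk seen in the log lines above generalizes (it's a heuristic, not a reliable bot detector):

```python
import re

# Genuine MSIE UA strings end "...Windows NT 5.1)"; the probes in the
# logs quoted above end "...Windows NT 5.1;)" -- a stray semicolon
# before the closing paren. That quirk makes a cheap heuristic flag.
FAKE_MSIE = re.compile(r"Windows NT 5\.1;\)")

def looks_like_fake_msie(user_agent):
    """Return True if the UA string shows the robot-typical quirk."""
    return bool(FAKE_MSIE.search(user_agent))

print(looks_like_fake_msie(
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;)"))  # True
print(looks_like_fake_msie(
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"))   # False
```

Run over an access log, this flags the xmlrpc.php probes shown earlier without touching legitimate MSIE traffic, though a bot that copies a real browser string verbatim slips through.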