Re: GoDaddy.com shuts down entire data center?

2006-01-17 Thread Chris Brenton

On Tue, 2006-01-17 at 03:19 -0500, Richard A Steenbergen wrote:
>
> The question at hand is, at what point does a registrar providing services 
> have an ethical or moral obligation to step in and do something when they 
> do encounter an excessive level of abuse by someone using their services? 

I think the issue here is not so much what happened, but how it
happened. The phishing problem was originally reported to godaddy and
then passed on to nectar on 1/9 (a Monday). It also appears the nectar
folks resolved the problem on the same day. After that point godaddy
continued to receive complaints about the same problem and, rather than
checking to see if the problem still existed, they just assumed it did.
Nectar appears to have even responded to godaddy stating that the
problem had already been resolved long before service was cut. 

IMHO the big issue is that service was cut on a Friday night just as the
only folks empowered to resolve the situation had left for the weekend.
I can see cutting service during a weekday morning to get the client's
attention on the matter. Doing it at a time when you know you'll be
causing a long term outage is just plain nasty.

HTH,
Chris




Re: DNS cache poisoning attacks -- are they real?

2005-03-29 Thread Chris Brenton

On Tue, 2005-03-29 at 08:49, Joe Maimon wrote:
>
> TIC: Apparently DNS was designed to be TOO reliable and failure resistant.

Ya, sometimes security and functionality don't mix all that well. ;-)

> As I understand from reading the referenced cert thread, there is the
> workaround which is disabling open recursion and then there are the
> potential fixes.

From an admin perspective, this is the way to go. This is a real easy
fix with Bind via "allow-recursion". I don't play with MS DNS that
often, but the last time I looked recursion was an on/off switch. So if
the MS DNS box is Internet accessible, you are kind of hosed.
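The "allow-recursion" fix can be sketched as a named.conf fragment; the ACL name and netblocks below are placeholders, not a recommendation for any particular network:

```
// Illustrative BIND options fragment: answer recursive queries only
// for our own clients, so outsiders can't use us as an open resolver.
acl "trusted" {
    192.168.0.0/16;   // replace with your client netblocks
    127.0.0.1;
};

options {
    allow-recursion { "trusted"; };
};
```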

> 1) Registrars being required to verify Authority in delegated to
> nameservers (will this break any appreciated valid models?) before
> activating/changing delegation for zone.

Back in the InterNIC days this was SOP. This security check got lost
when things went commercial. Not sure if it would be possible to get it
back at this point. Too many registrars out there to try and enforce it.

IMHO lack of verification is only part of the problem (that has been
going on for years). What has made this more of an issue is registrars
that offer immediate response time to changes. This makes it far easier
for spammers to move to other stolen resources as required.

> Is it possible/practical to perpertrate this kind of hijak without
> registrar cooperation by first seeding resolver's caches and then
> changing NS on authoritative so that future caches will resolve from
> seeded resolvers? Is it possible to not even need to change the zone
> served NS/SOA and to use the hijaking values from the get-go?

Possibly. I ran into a bug/feature with Bind back in the 8.x days which
causes the resolver to go back to the last known authoritative server
when a TTL expires. On the plus side, this helps to reduce traffic on
the root name servers. On the down side, if the remote name server still
claims authority you will never find the new resource. I ran into the
problem moving a client from one ISP to another while the old ISP was
acting vindictive and refused to remove the old records. This of course
caused problems for their clients because when the TTLs expired they
kept going back to the old resource. Only way to clear it is a name
server restart at every domain looking up your info.

When I reported this, the bug/feature was changed, but I noticed a while
back (late 8.x, maybe 9.0) that it is back. So if the perp can get you to
the wrong server only once, it may be possible to keep you there.

> 2) Stricter settings as regards to all lame delegations -- SERVFAIL by
> default without recursion/caching attempts?

See my last post. IMHO there are too many broken but legitimate name
servers out there for this to be functional for most environments.

> Is all the local limitations on TTL values a good thing?

In this case, absolutely! With the default Bind setting, a six-week TTL
will get quietly truncated to a week. This means a trashed cache
will fix itself in one week rather than six.

HTH,
Chris




Re: DNS cache poisoning attacks -- are they real?

2005-03-29 Thread Chris Brenton

On Tue, 2005-03-29 at 05:37, Simon Waters wrote:
>
> The answers from a recursive servers won't be marked authoritative (AA bit 
> not 
> set), and so correct behaviour is to discard (BIND will log a lame server 
> message as well by default) these records.
> 
> If your recursive resolver doesn't discard these records, suggest you get one 
> that works ;)

In a perfect world, this might be a viable solution. The problem is
there are far too many legitimate but "broken" name servers out there.
On an average day I log well over 100 lame servers. If I broke this
functionality, my helpdesk would get flooded pretty quickly with angry
users.

HTH,
Chris




Re: DNS cache poisoning attacks -- are they real?

2005-03-28 Thread Chris Brenton

On Mon, 2005-03-28 at 01:04, John Payne wrote:
>
> And to Randy's point about problems with open recursive nameservers... 
> abusers have been known to cache "hijack".  Register a domain, 
> configure an authority with very large TTLs, seed it onto known open 
> recursive nameservers, update domain record to point to the open 
> recursive servers rather than their own.  Wammo, "bullet proof" dns 
> hosting.

I posted a note to Bugtraq on this process about a year and a half ago
as at the time I noticed a few spammers using this technique. Seems they
were doing this to protect their NS from retaliatory attacks. 
http://cert.uni-stuttgart.de/archive/bugtraq/2003/09/msg00164.html

Large TTLs only get you so far. It all depends on the default setting of
max-cache-ttl. For Bind this is 7 days; for MS DNS it's 24 hours. Obviously
spammers can do a lot of damage in 7 days. :(
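For anyone wanting a tighter ceiling than Bind's 7-day default, max-cache-ttl can be lowered in named.conf. A minimal sketch (the 24-hour value is just an illustration):

```
// Illustrative BIND options fragment: clamp how long any record may
// live in our cache, regardless of the TTL the authority advertises.
options {
    max-cache-ttl 86400;    // cap cached answers at 24 hours
    max-ncache-ttl 3600;    // cap negative answers at 1 hour
};
```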

HTH,
Chris




Re: 30 Gmail Invites

2004-09-11 Thread Chris Brenton

On Sat, 2004-09-11 at 22:26, Paul Vixie wrote:
>
> i still can't understand why anyone would want a gmail account, free or not.

But..but..but..it's "special". You have to be invited. ;-)

C




Re: Very peculiar Telnet probing (possibly spoofed?)

2004-09-09 Thread Chris Brenton

On Thu, 2004-09-09 at 01:48, Jeff Kell wrote:
>
> I suspect but cannot prove 
> that the packets are being spoofed as we are dropping (not resetting) 
> the probes, yet they continue.  There are repeated probes from the same 
> IP address for about 15-20 minutes or more, then it moves along, but the 
> resulting router logs blocking them looks initially random (from SE Asia 
> sites). 

Could be an idle scan. If so, that would mean each of these sources are
just quiet hosts being leveraged by the real attacker.

Easiest way to tell is to return a SYN/ACK and look for TTL variances
between the original SYN and the resulting ACK. My experience has been
you'll also see discrepancies in the IP ID: the SYN packets will be
non-predictable while the ACK packets will be predictable.
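The TTL/IP ID comparison described above can be sketched as a small helper; the function name, sample format, and thresholds here are my own illustration, not an existing tool:

```python
# Illustrative check for the idle-scan giveaways described above.
# syn_samples come from the probe SYNs you logged; ack_samples come
# from the ACKs the "source" returns after you answer with a SYN/ACK.
def looks_like_idle_scan(syn_samples, ack_samples, ttl_slack=2):
    """Each sample is a (ttl, ip_id) tuple pulled from packet headers."""
    syn_ttls = [t for t, _ in syn_samples]
    ack_ttls = [t for t, _ in ack_samples]
    # Two different hosts rarely sit the same hop count away, so a TTL
    # gap between the "same" source's SYNs and ACKs is suspicious.
    ttl_gap = abs(max(syn_ttls) - max(ack_ttls)) > ttl_slack

    # A quiet zombie host increments its IP ID predictably (+1 or +2
    # per packet on classic stacks); forged SYNs usually do not.
    def predictable(ids):
        deltas = {(b - a) % 65536 for a, b in zip(ids, ids[1:])}
        return deltas <= {1, 2}

    syn_ids = [i for _, i in syn_samples]
    ack_ids = [i for _, i in ack_samples]
    return ttl_gap and predictable(ack_ids) and not predictable(syn_ids)
```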

If it is an idle scan, the only way (I'm aware of) to identify the real
attacker is to work with the admin for the source IP. They'll see some
IP address probing the source IP at about the same interval you are
seeing the probes. _That_ source IP is the one you want to go after.

HTH,
Chris




Re: sms messaging without a net?

2004-08-03 Thread Chris Brenton

On Tue, 2004-08-03 at 05:17, Dan Hollis wrote:
>
> Does anyone know of a way to send SMS messages without an internet 
> connection?

Can you use chat?
http://www.ists.dartmouth.edu/IRIA/knowledge_base/swatch.htm

C




Re: ad.doubleclick.net missing from DNS?

2004-07-27 Thread Chris Brenton

On Tue, 2004-07-27 at 21:44, Paul Vixie wrote:
>
> on the one hand, you'd need a wildcard A RR at *.doubleclick.net to
> achieve this result.  the above text does not mention this, and leads
> one to believe that an apex A RR at doubleclick.net would have an effect.

Depends what you are trying to do. I'm perfectly happy to have
*.doubleclick.net return a "host not found", so a file with no A records
works fine for me.

> on the other hand, if you do this for a nameserver that your customers
> depend on, then there is probably some liability for either trademark
> infringement, tortious interference with prospective economic advantage,
> and the gods alone know what else.

Guess I don't see this as being any different than restricting access
based on port number or IP address. If your SLA empowers you to
selectively block traffic, what's the difference?

I agree however that at the ISP level it's probably good practice to
_not_ do this. Then again, when I had my ISP I did filter out
doubleclick as well as certain IPs and ports. This was in the SLA
however so clients knew this was happening (and considered it a
"feature") before they signed up for service.

C




Re: ad.doubleclick.net missing from DNS?

2004-07-27 Thread Chris Brenton

On Tue, 2004-07-27 at 18:21, John Palmer wrote:
>
> Now the question is, can one easily block all of doubleclick.net by 127.0.0.1 in the 
> hosts file
> on a wincrash box? They appear to have ad, ad2, ad3, m2, m3.doubleclick.net. Anyone 
> know
> what hosts to list??? (ie: ad2, ad3 ... to ad???)

Been fixing that for a good 6 years now. Just set up your local name
servers to be authoritative for doubleclick.net and don't put any A
records in the file. Works like a charm. ;-)
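For anyone who wants to try this, a minimal sketch of the two files involved (file names are placeholders; syntax assumes Bind 8/9):

```
// Illustrative named.conf entry: claim authority for the ad domain.
zone "doubleclick.net" {
    type master;
    file "null.zone";
};
```

```
; null.zone -- a zone with SOA and NS but no A records, so every
; lookup under doubleclick.net comes back "host not found".
$TTL 86400
@   IN  SOA ns.localhost. hostmaster.localhost. (
        1       ; serial
        3600    ; refresh
        900     ; retry
        604800  ; expire
        86400 ) ; minimum
@   IN  NS  ns.localhost.
```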

Chris
 




Re: VeriSign's rapid DNS updates in .com/.net

2004-07-22 Thread Chris Brenton

On Thu, 2004-07-22 at 20:24, Robert L Mathews wrote:
>
> At 7/22/04 10:08 AM, Paul Vixie wrote:
> 
> >the primary beneficiaries of this
> >new functionality are spammers and other malfeasants
> 
> I think you're suggesting that such people will register domain names and 
> use them right away (which may be true), and that the lack of a delay 
> enables them to do things they couldn't otherwise do (which isn't).

Actually, this *does* make the spammers' lives a whole lot easier. See
my post to Bugtraq from about a year ago titled "Permitting recursion
can allow spammers to steal name server resources". It pretty much
hinges on the spammers finding an authority that will react quickly to
change requests.

Worst part is a year after that post I still see this activity taking
place. :(

HTH,
Chris




Re: Spamcop

2004-05-11 Thread Chris Brenton

On Tue, 2004-05-11 at 18:15, Laurence F. Sheldon, Jr. wrote:
>
> As an ex-admin, I have some "serious issues" about the way Spamcop
> works, but this argument is similar to one that says a credit reporting
> company has to prove that you are a deadbeat before reporting that
> several companies you do business with report that you are late with
> payments a lot.

I would agree with your analogy if Spamcop limited automatic reporting
to a subset of the community. The problem is they do not. I can't call up
a credit agency and get them to automatically red mark your credit
report. I obviously can send pretty much anything to Spamcop, claim you
are a spammer and get them to act on that.

Cheers,
Chris




Re: Spamcop

2004-05-11 Thread Chris Brenton

On Tue, 2004-05-11 at 16:35, Guðbjörn S. Hreinsson wrote:
>
> Possible someone on the list didn't understand the content, didn't 
> realize this was sent via a mailing lists and submitted this as a spam 
> message to SPAMCOP. Less likely someone didn't know how to 
> get off the mailing list and this was the result. 
> 
> In both cases the submitter exercised bad judgement. But the mailing 
> list could be more helpful as well.

Further, Spamcop should implement some form of check to verify that the
e-mail is in fact spam before they go pointing the finger and/or
blocking mail servers. The problem of end users leveraging Spamcop to
get them off of mailing lists or a simple way of DoSsing a discussion
forum would become moot if some form of sanity checking was in place.

Cheers,
Chris




Re: Microsoft XP SP2 (was Re: Lazy network operators - NOT)

2004-04-19 Thread Chris Brenton

On Mon, 2004-04-19 at 06:27, Brian Russo wrote:
>
> There're a lot more 0-days than that.

Agreed. My ego has not grown so large as to think I've seen every 0-day.
;-) As I said however, the true number of 0-days is less than ground
noise compared to the number of systems that *could* have remained safe
with proper patching or configuring. 

> They just tend to remain 
> within a smaller community (typically the ones who discover it) and are 
> used carefully/intelligently for compromises, often for a very long 
> time.

Agreed. I think part of what makes 0-day easier to hide *is* the raw
quantity of preventable exploits that are taking place. In many ways we
have become numb to compromises so that the first response ends up being
"format and start over". If 0-day was a higher percentage, it would be
easier to catch them when they occur and do a proper forensic analysis. 

> Agreed, and even conscientious users screw up. I did this some months 
> ago when installing MS SQL Server Desktop Engine from a third-party CD 
> (packaged with software).


I guess I have a hard time blaming this type of thing on the end user.
Part of the fallout from making computers easier to use is making it
easier for end users to shoot themselves in the foot. One of the
benefits of complexity is that it forces end user education. I'm
guessing that if you had to load SQL as a dependency you would have
caught your mistake before you made it. 

Let me give you an example of the easy to use interface thing. Back in
2000 I made it a personal goal to try and get the top 5 SMURF amplifier
sites shut down. I did some research to figure out what net blocks were
being used and started contacting the admins. Imagine my surprise when I
found out that 3 of the 5 _had_ a firewall. They had clicked their way
though configuring Firewall-1, didn't know they needed to tweak the
default property settings, and were letting through all ICMP
unrestricted and unlogged. 

IMHO it's only getting worse. I teach a lot of perimeter security folks
and it seems like more and more of them are moving up the ranks without
ever seeing a command prompt. I actually had one guy argue that
everything in Windows is point and click and if you could not use a
mouse to do something, it was not worth doing. Again, I don't see this
as an end user problem because as an industry we've tried to make
security seem easier than it actually is. We want to make it like
driving a car when it's more like flying an airplane.


Cheers,
Chris




Re: Microsoft XP SP2 (was Re: Lazy network operators - NOT)

2004-04-19 Thread Chris Brenton

On Sun, 2004-04-18 at 23:16, Sean Donelan wrote:
>
> When the Morris worm was release, there wasn't a patch available.  Since
> then essentially every compromised computer has been via a vulnerability
> with a patch available or misconfiguration (or usually lack of
> configuration).

Key word here is "essentially". I've been involved with about a half
dozen compromises that have been true zero days. Granted that's less
than ground noise compared to what we are seeing today.

> As far as improvements go, Microsoft's XP SP2 is a great improvement.  If
> you have a Window's machine, implementing XP SP2 could help with a lot of
> the stupid vulnerabilities.  Unfortunately less than 50% of Internet users
> have XP.

This ends up being a catch 22 all the way around. Since MS has focused
on locking down XP, they have ended up focusing on a minimal market
share of the problem. With this in mind, I don't think we are going to
see things getting any better now that SP2 is out. For the end user
running 2000 or less, it ends up sounding like "we screwed up and sold
you an insecure product so now we want you to give us more money in
order to fix the problem". A fix that addressed the problem in a more
universal fashion would have been cool. 

> Should ISPs start requiring their users to install Windows XP SP2?

Many folk have already commented on the economics of trying to require
this. I think technically it would be hard to implement as well. I've
done a lot of work with passive fingerprinting and from my observations
you don't see enough of a difference in the packet creation to tell the
difference between patched and unpatched systems. This leaves you with
active fingerprinting which may fail if a personal firewall is active,
or loading software on their system which is now a whole other support
nightmare. Lots of overhead for little gain in my opinion.

Also, don't underestimate a person's ability to shoot themselves in the
foot. Windows 2003 server, out of the box, is technically one of the
most secure operating systems out there because it ships with no open
listening ports. Based on the auditing I've done however, it ends up
being deployed even less secure than 2000 because a lot of admins end up
doing the "turn everything on to get it working" thing. An uneducated
end user is not something you can fix with a service pack.

Chris




Re: Firewall opinions wanted please

2004-03-18 Thread Chris Brenton

On Thu, 2004-03-18 at 15:26, Alexei Roudnev wrote:
>
> > A good firewall *should* be doing a whole lot more than that. It should
> Do not overestimate. Firewall can make a little more than just restrict
> access and inspect few (very  limited) protocols.

If this concerns you, just use a proxy instead of stateful inspection.
Even better, use both to leverage the speed of the packet filtering and
the application control of the proxy. Defense in-depth and all of that.

> It can not protect you from slow scans;

If a firewall can't stop a scan because it's slow, then the firewall is
broken. If you are talking about detecting a port scan, then it's a
matter of how you parse the data. I can easily detect port scans as slow
as 1 port/4 hours with Netfilter. I can push this out to 1 port/week if
the source IP is on my "potentially hostile" list.
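The kind of parsing involved can be sketched in a few lines. This is a rough sketch of the idea, with an illustrative event format and thresholds, not the actual script:

```python
from collections import defaultdict

def slow_scanners(events, window=24 * 3600, min_ports=5):
    """events: iterable of (epoch_seconds, src_ip, dst_port) tuples
    pulled from firewall logs.  Flag any source that probes min_ports
    distinct ports inside the sliding window -- a 1-port-per-4-hours
    scan still trips a 24-hour window eventually."""
    seen = defaultdict(list)             # src -> [(time, port), ...]
    flagged = set()
    for ts, src, port in sorted(events):
        # Drop hits that have aged out of the window, then record this one.
        hits = seen[src] = [(t, p) for t, p in seen[src] if ts - t <= window]
        hits.append((ts, port))
        if len({p for _, p in hits}) >= min_ports:
            flagged.add(src)
    return flagged
```

Widening the window (as with the "potentially hostile" list mentioned above) trades memory for sensitivity to even slower scans.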

> it can not protect you from SSL /
> SSH / (any other encrypted protocol) volnurabilities,

All depends on what you need. For example if you want to inspect
payload, terminate the tunnel at the firewall or some external device
(like an SSL accelerator) and then run the payload through a reverse
proxy. If it's outright blocking you want, just inspect for the initial
handshake and drop as required. You only need to check the first couple
of ACKs to do this correctly.
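Spotting the initial handshake really is just a few header bytes. An illustrative sketch (the function name is mine; 0x16 is the TLS record type for "handshake"):

```python
# Illustrative first-payload check: a TLS/SSLv3 session opens with a
# "handshake" record (type 0x16), major version 3.  Matching the
# leading bytes is enough to spot -- and drop -- the session setup
# without decrypting anything.
def starts_tls_handshake(payload: bytes) -> bool:
    return (
        len(payload) >= 3
        and payload[0] == 0x16   # TLS record type: handshake
        and payload[1] == 0x03   # major version (SSL3 / TLS 1.x)
    )
```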

> it can not protect your users from viruses in e-mail, etc etc.

I don't remember saying it would. What I do remember saying is that the
firewall could be used to help detect outbound activity if the internal
host becomes a zombie due to e-mail based viruses. 

> Very good level of details - 200 Mb of daily logs (IP, IP protocol = https).
> Any network statistics system can do it. Unfortunately, all this logs are
> 99% useless until you need forensics.

I guess it's a matter of what you do with them. I personally find my
firewall logs *very* useful and can ID a wide range of suspicious
activity, even a few that are payload based despite the fact that the
firewall does not log the payload. As for review time, 200 MB takes me
maybe 20 minutes with my parsing script unless I find something *really*
interesting that I want to drill in on. Then the time factor comes down
to when my obsessive compulsive personality will let it go. ;-)

But then again, I'm one of *those* geeks that finds log review to be a
fun way to spend a weeknight. I expect if I found it to be more of a
chore I would also find them to be less than useful.

> > perimeter. It should also be doing some level of content checking to
> In reality, I can count all useful things firewall can do. I can not count
> (it is infinite) numbers of things it can not do.

So basically your argument is "it's good at some things but not others,
so why bother?" Given that line of thinking, why bother with an IDS when
it can't detect Ethernet CRC errors? Why bother running a virus scanner
when it can't keep your system patched? Why bother patching your
systems when that does not help add the fabric softener during the
rinse cycle?

A firewall is a tool, no more, no less. The capability of that tool is
90% dependent on the person wielding it. If you can only find a
limited number of applications for a firewall, I'm not surprised that
you don't find it all that useful. That does not mean the same is true
for the rest of us.

HTH,
C




Re: Firewall opinions wanted please

2004-03-18 Thread Chris Brenton

OK, I've tried to stay out of this, but...

On Thu, 2004-03-18 at 01:17, Alexei Roudnev wrote:
>
> No. let's imagine, that I have 4 hosts, without ANY security problems in
> software,

Exactly how do you *prove* there are zero security problems with any of
this software? I hate to say it, but a lot of the security issues we are
faced with today are because people thought they could build secure
software without worrying about a secure architecture. That's exactly
what you are doing here.

> Firewall protects other services from outside access.

A good firewall *should* be doing a whole lot more than that. It should
also be giving you a good level of detail about what crosses your
perimeter. It should also be doing some level of content checking to
protect the servers behind it. It should also be stopping and alerting
you if that Web server one day tries to TFTP out to the Internet. Etc.
etc. etc.

> Second. Not ANY network require FireWall. If network (grandma) do not allow
> any ACCESS fron Internet (grandma's netword do not allow access because it
> does not expose any IP device to outside network, using NAT for outgoing
> connections), it can live withourt any ACl and any firewall attributes 

 
Absolutely, because who cares if someone drops a call-home Trojan on
Grandma's system (via e-mail or a nasty URL) which turns the system into
a spam relay or a DDoS zombie. That would *never* happen, right?
 

Oh wait, I seem to remember that both of these problems are discussed on
at least a weekly basis in this forum. A firewall can't prevent the
above attacks, but it can give you a heads up that they happened.

> - and
> be as secure as production network with expansive firewall(s).

Dude, *please* don't take this as a slam, but you really need to come
more up to speed on this technology. 

> Key word is _ACCESS_. No ACCESS - no FireWall (cut wires).

Agreed, but in both of your examples where you say a firewall is not
needed, you include some level of access.

Now if you are going to cut the wires and ensure there are no 802.11 or
dial-in access points, I'll agree so long as physical security is up to
snuff.

> One Way Access -
> many different devices plays role of firewall (PNAT translator, for example,
> makes 99.9% of the work).

Hey, has anyone tested this lately? I beat up on a number of NAT-only
firewalls about 3 years ago and found that approximately half could be
defeated by simply using loose source routing. Has anyone tested the
latest round up of products for this "functionality"?

HTH,
Chris




Re: Assymetric Routing / Statefull Inspection Firewall

2004-03-17 Thread Chris Brenton

On Tue, 2004-03-16 at 21:27, Mike Turner wrote:
>
> I am currently looking for a statefull inspection firewall
> that support asymmetric routing – is there such a product?

Sounds like you are looking for an SI firewall that supports full load
balancing, not just high availability. FW-1 does this, there may be
others as well.

Keep in mind that you can run into connectivity issues if you have big
pipe connections. You end up in a situation where outbound packets can
cross one firewall and replies can hit the other before the state info
has had time to sync. 

Beyond that, it should fit your need.
Chris




RE: ISS X-Force Security Advisories on Checkpoint Firewall-1 and VPN-1

2004-02-06 Thread Chris Brenton

On Fri, 2004-02-06 at 09:43, McBurnett, Jim wrote:
>
> If I was a real hacker, and I found the problem, might I also know the fix?
> And if I was really nice, would I give that fix to the vendor?
> Or could it be that a former Checkpoint employee is now an ISS employee?
> Or .?

In my experience, CP does not exactly have the best track record for
fixing problems. When I've informed them of vulnerabilities in the past
I've heard everything from "Well you would not have that problem if you
used the product the way it was intended" (remote overflow), to "we'll
fix that problem in the service release coming out 3 months from now"
(a DoS that script kiddies were using against multiple sites, with the
tool in the wild).

Some vendors are slow no matter what you do. :(

C





Re: What's the best way to wiretap a network?

2004-01-18 Thread Chris Brenton

On Sat, 2004-01-17 at 21:08, Sean Donelan wrote:
>
> Assuming lawful purposes, what is the best way to tap a network
> undetectable

The best way to go undetectable is easy: run the sniffer without an IP
address. The best way to tap a network varies with your setup. If you're
on a repeated (hub) network, just plug in and go. If you're switched
(which most of us are), you need to figure out how to get in the middle
of the data stream you want to monitor.

The best solution I've found is to use an Ethernet tap. It allows you to
piggy back off of an existing connection and monitor all the traffic
going to and from that system. It's pretty undetectable, does not use any
additional switch ports, and allows you to run full duplex. A number of
vendors sell them, and a Google search will turn up sites on how to make them.

You can plug a mini-hub in line and use that as a tap point to monitor
the stream. Up side is it's cheap and easy. Down side is you have to drop
to half duplex. Not a problem in most situations, but in some the drop in
performance can be an issue.

Many switch vendors include a copy or mirror port that allows you to
replicate all traffic to and from a specific port, to some other port
where you can plug in your sniffer. Up side here is ease of
configuration. If you want to start monitoring a different port, it's a
simple configuration change within your switch. Down side is you could
end up missing packets (I've run into this myself). Seems when some/many
switches get busy the first thing they stop doing is copying packets to
the mirror port.

There are tools out there like Dsniff and Ettercap that allow you to
sniff in a switched environment. I recommend you avoid them because they
tend to either work or hose your network. You don't want to DoS
yourself. ;-)

>  to the surveillance subject, not missing any
> relevant data, and not exposing the installer to undue risk?

Sniffing is a passive function, so it's always possible you are going to
miss data. It all depends on the capabilities of the box recording the
packets.

As for "risk", that's always there as well. For example check the
Bugtraq archives and you are going to find exploits that work against
tools like Tcpdump and Snort. The attacks go after the way the software
processes the packet. So even if you are running without an IP address,
it's possible that someone with malicious intent can DoS the box.

HTH,
C 




Re: sniffer/promisc detector

2004-01-16 Thread Chris Brenton

On Fri, 2004-01-16 at 18:00, Gerald wrote:
>
> I should probably mention that I've already started looking at antisniff.
> I was hoping to find something that was currently maintained and still
> free while I investigate antisniff's capabilities.

Antisniff is still the best software-based tool for the job. It has far
more extensive testing than anything else I've looked at.

Of course the one blind spot with antisniff is that it can only detect
sniffers that have an IP address assigned to them. To detect these you
have to look at your switch statistics. Dead giveaway is a host
receiving traffic but never transmitting. There is a false positive for
this condition, however, which is a hub plugged into the switch with no
hosts attached.
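Pulling that giveaway out of switch statistics is simple once you can export the counters. An illustrative sketch, assuming per-port frame counts in a dict:

```python
# Illustrative pass over per-port switch counters: a port that has
# received plenty of frames but transmitted none is either a passive
# sniffer or (the false positive noted above) an empty hub.
def suspect_ports(counters, min_rx=1000):
    """counters: {port_name: (rx_frames, tx_frames)}"""
    return sorted(
        port for port, (rx, tx) in counters.items()
        if rx >= min_rx and tx == 0
    )
```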

HTH,
C




Re: interesting article on Saudi Arabia's http filtering

2004-01-15 Thread Chris Brenton

On Thu, 2004-01-15 at 17:11, Eric Kuhnke wrote:
>
> And if he fails, what with the fact that sending all Internet traffic in 
> the whole country through a single chokepoint obviously creates a single 
> point of failure, all Net traffic in Saudi Arabia stops.

Not sure if it's still the same setup, but up till 2 years ago this
consisted of 6 HTTP proxies sitting on the same class C. Best part was
they were _open_ proxies, so it was not uncommon to have a .net or .uk
attacker bounce through them on the way to attacking your site. 

Oh joy...
C



Re: Stopping ip range scans

2003-12-29 Thread Chris Brenton

On Mon, 2003-12-29 at 06:47, [EMAIL PROTECTED] wrote:
>  Recently (this year...) I've noticed increasing number of ip range scans 
> of various types that envolve one or more ports being probed for our
> entire ip blocks sequentially.

You're lucky. I've been watching this slowly ramp up for the last 10
years. ;-)

> At first I attributed all this to various 
> windows viruses, but I did some logging with callbacks soon after to 
> origin machine on ports 22 and 25) and substantial number of these scans 
> are coming from unix boxes.

Since no one (to my knowledge) has ever been arrested or sued over a
port scan, there is nothing holding back the script kiddies from doing
them at will. Heck, check the archives here and you will find a number
of posts where various people feel this is legitimate and justifiable
activity. 

>  I'm willing to tolerate some random traffic 
> like dns (although why would anybody send dns requests to ips that never 
> ever had any servers on them?)

Simplicity. It's easier to write a scanner that just hits sequential
and/or random IPs rather than troll around looking for legitimate name
servers. That and the unadvertised ones are more likely to be vulnerable
anyway.

>   So I'm wondering what are others doing on this regard? Is there any 
> router configuration or possibly intrusion detection software for linux 
> based firewall that can be used to notice as soon as this random scan 
> starts and block the ip on temporary basis?

Check out Bill Stearns Firebrick project:
http://www.stearns.org/firebricks/

Basically, these are plug-in rule sets for iptables. The three you are
interested in are ban30, checksban and catchmapper. If you want a little
less overhead, you can use catchmapreply. Also, the bogons module might
be interesting for an ISP environment. Note that the plength module
implements some of the fragment size limitations I was querying this
group about a few weeks back. :)

>  Best would be some kind of way 
> to immediatly detect the scan on the router and block it right there...
> Any people or networks tracking this down to perhaps alert each other?

Check:
http://www.dshield.org/

I *think* Johannes has even added the ability to query based on AS.

HTH,
C




Re: Minimum Internet MTU

2003-12-22 Thread Chris Brenton

On Mon, 2003-12-22 at 19:10, Stephen J. Wilcox wrote:
>
> Whats IP over DNS, 512 bytes.. wouldnt want to kill my hotel access now huh?

LOL! 

And lest we forget RFC 1149. I think this limits carrier pigeon MTU to
256 milligrams. ;-)

C






Re: Extreme spam testing

2003-12-22 Thread Chris Brenton

On Mon, 2003-12-22 at 16:55, Andy Dills wrote:
>
> > This is going to sound really snippy, but who died and made then
> > god/goddess of the Internet? Where is the document trail empowering them
> > to be spam cops of the Internet with absolute authority to probe who
> > ever they see fit?
> 
> This is a can of worms with no answer. Who gives authority to IANA for
> that matter?

That was my point. I was responding to someone that was implying that
njabl was doing this for the benefit of everyone and thus had some
authority to do so. Obviously that's not the case.

> > Humm. This is something I have not run into before. Can you supply a URL
> > that explains how to relay mail through a Telnet or RADIUS server?
> 
> No, but I can supply a URL that explains how to change the port that proxy
> servers bind to. I don't think you actually need that, though.
> 
> You really think people who professionally hack servers and setup spam
> relay proxies put them on the standard ports?

Again, this was my point. Finding out if I have an exposed RADIUS server
is not really evidence that I'm running an open SMTP proxy. So where
does it stop? Scanning all 65K ports? Full OS fingerprinting to shun the
most compromised OS's? Maybe we insist on being provided with root
access to verify the box as being clean before we accept their e-mail?
This slope can get pretty scary.

> > LOL! I see, this is my fault because I actually take steps to secure my
> > environment. ;-)
> 
> No, but it is your fault for overreacting to your IDS.

I honestly don't think I overreacted. My original post labeled the
traffic as simply "interesting" and I stated I was posting it in case
others were interested and had not noticed it in their logs. No call to
arms, flames, or rants for wide spread blacklisting, just an FYI in case
others found the info useful.

> Security doesn't require an IDS. An IDS merely tells you who's checking
> your doorknobs to see if they're locked. If you do a good enough job
> keeping your doors locked, an IDS is little more than a touchy doorbell at
> 3 AM, being tripped by the wind.

An IDS is more like an empty box. One person may look at it and see a
simple storage device. Show it to a 5 year old however and it becomes a
boat, a plane, a car, a castle, etc. etc. etc. I mentioned in another
thread that I've caught plenty of 0-day stuff with my IDS. In other
words, stuff that had no known signatures or patches. It's also helped me
out in a fair amount of troubleshooting. It's all a matter of being
inventive and knowing what to look for. If you perceive your IDS to be
"little more than a touchy doorbell", I would highly recommend attending
SANS IDS training. It'll open your mind and show you a wealth of other
possibilities. 

Regards,
Chris




Re: Trace and Ping with Record Option on Cisco Routers

2003-12-22 Thread Chris Brenton

On Mon, 2003-12-22 at 18:18, Crist Clark wrote:
> > [EMAIL PROTECTED] wrote:
> > 
> > Hey, Group.
> > 
> > In my production network, I'm trying to do some extended traces and pings with the 
> > record option turned on to see what route my packets take going and returning.  
> > It's not working.  If I do the extended traceroute or ping without the record 
> > option, it works fine.  There is a firewall (PIX) a few hops in front of the 
> > destination I'm trying to record the route for.  What part of ICMP is this that 
> > needs to be opened on the firewall to allow this to come back?  First time I'm 
> > coming across this.
> 
> It's not ICMP. It's the IP Options. Most firewalls will drop any
> packet with an IP Options.

Actually, I've done some testing on this. Most firewalls completely
ignore options and do not allow you to filter them. I've found quite a
few NAT firewalls that you can easily bounce over using loose source
routing.

The exceptions I've found are PIX, IPFilter, pf and iptables. Cisco IOS
has a new "ip options drop" command, but I have not tried it. Older
versions of IOS would let you do rudimentary option filtering via ACLs,
but I don't remember record route as being one of the possible options.
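For what it's worth, on Linux you can approximate option filtering without a dedicated match by checking the IP header length: any IHL greater than 5 words means options are present. A hedged sketch (the u32 expression is my own construction; double-check it before deploying):

```shell
# Sketch: drop any IPv4 packet carrying IP options (IHL > 5).
# u32 loads the 4 bytes at offset 0 of the IP header; the mask
# 0x0F000000 isolates the IHL field, and the range
# 0x06000000:0x0F000000 matches IHL values 6 through 15.
iptables -A FORWARD -m u32 --u32 "0&0x0F000000=0x06000000:0x0F000000" -j DROP
```

This catches record route, loose/strict source route, timestamps, and everything else in one shot, at the cost of not being able to permit any single option selectively.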

So I would also guess that the PIX is the culprit. You can try disabling
the options drop to see if that helps, and check the ACLs to see if
options are being filtered. Either way you can confirm this is where you
are losing the packet by taking some traces or checking the logs.

HTH,
C




Re: Extreme spam testing

2003-12-22 Thread Chris Brenton

On Mon, 2003-12-22 at 13:46, Andy Dills wrote:
>
> > Agreed. My spam is _my_ problem and fixing it should not include making
> > it everyone else's problem. Forget whether its legal, its pretty
> > inconsiderate as many environments flag this stuff as malicious so it
> > triggers alerts.
> 
> Hmm...actually, YOUR spam is MY problem.
>  That's how this works.

Except it's broken because the message in question was not spam. It was a
technical post to the NANOG mailing list that triggered the 100+ port
scan, as well as about 15 different variations attempting to relay
e-mail through my server. Am I missing the Viagra ad that gets tacked to
the end of all NANOG posts? ;-)

> I applaud njabl.

I guess I don't. I can *totally* understand wanting to control the
amount of spam that an environment receives. I obviously deal with this
problem as well. I guess in my mind however I feel like the cost/burden
of dealing with that spam should be my responsibility, and I should not
expect legitimate organizations that are not part of the problem to
incur a financial impact due to my efforts.

For example their scans and probes would easily trigger an alert in most
environments (they did in mine and I'm by no means high security). This
means that a security analyst now has to check out the traces and see if
its a real attack. Then a decision has to be made as to how to deal with
it, which may well require (depending on policy) multiple resources. So
I end up spending money so njabl can try and reduce the amount of spam
they receive. Oh joy, oh rapture.

Also, I don't see this as a totally effective solution. This works if
the spam comes through an open relay, but fails if it does not. That
means you need some other layer of checking to deal with the non-relay
spam. Something like Spamassassin for example. Of course Spamassassin
can also easily deal with the open relay spam as well, without requiring
an obtrusive check back system.

Finally, I used to blacklist known spammer's IP addresses as well, but
stopped after I crunched some numbers. When you blacklist the spammer's
IP, they don't give up and remove your address, they just keep trying.
The bandwidth lost to the retries (on average) is greater than the
bandwidth used to transmit the actual spam. So blocking spam saves you
some temporary disk space, but increases network utilization.
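The back-of-envelope math looks roughly like this. The message size, retry count, and per-attempt overhead below are illustrative assumptions of mine, not measured figures:

```shell
# Rough, illustrative numbers only: accepting one 5 KB spam once
# vs. rejecting it and eating N retried delivery attempts, each
# burning ~1 KB of SMTP envelope chatter before the 5xx.
accepted=5120                     # bytes to just accept the message
retries=16                        # assumed retry attempts after a reject
per_attempt=1024                  # assumed bytes of handshake per attempt
rejected=$((retries * per_attempt))
echo "accept once: ${accepted} bytes, keep rejecting: ${rejected} bytes"
```

With these (made-up) numbers, the rejects cost roughly three times the bandwidth of just taking the message; the crossover obviously depends on message size and the sender's retry schedule.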

> If you have open relays, proxies, or whatnot, I want to know about it, so
> I can reject all mail from you.

Again, except I don't. If I transmit spam, I should expect to be poked
and probed. When one receives an unprovoked probe/attack like this, the
target is going to assume the source is hostile. It's not until you spend
time looking into it (in other words, burn $$$ on resources) that you
figure out that someone actually considers this pattern to be "a
feature".

>  If we have a single entitity that does all
> this scanning, we as individual entities do not need to scan ourselves.

This is going to sound really snippy, but who died and made them
god/goddess of the Internet? Where is the document trail empowering them
to be spam cops of the Internet with absolute authority to probe whomever
they see fit? 

Also, it does not quite work out that they are the only ones doing it
(see earlier thread on AOL). They just seem to be more aggressive than
most. 

> Therefore, njabl is REDUCING the number of people scanning your netblocks
> for proxies. If they didn't do it for me, I'd be doing it myself, along
> with numerous other networks.

I guess we can "agree to disagree" here as I'm not a "ends justifies the
means" type of person. I want to reduce the amount of spam I receive as
well, and certainly would not mind making the spammer's lives a bit more
difficult. I don't want to do that however at the cost of
annoying/sucking money out of legitimate Internet users.

> > As a follow up, it also looks like they did a pretty aggressive port
> > scan of my system. Not sure how checking Telnet, X-Windows or RADIUS
> > will tell them if I'm a spammer, but what ever.
> 
> proxies, proxies, proxies.

Humm. This is something I have not run into before. Can you supply a URL
that explains how to relay mail through a Telnet or RADIUS server?

>  But like you say, "whatever". It's not like you
> would have noticed if you didn't obsessively scan your logfiles or have an
> IDS.

LOL! I see, this is my fault because I actually take steps to secure my
environment. ;-)

Thanks for the chuckle,
C




Re: Extreme spam testing

2003-12-22 Thread Chris Brenton

On Mon, 2003-12-22 at 11:04, Etaoin Shrdlu wrote:
>
> Um, welcome to the world of spam nazis.

I've seen returning MX queries and even source address validation, but
never anything this excessive up till now. IMHO it's hard to tell if they
are looking for spam relays to reduce spam, or because they are looking
to generate some spam themselves. ;-)

>  I hate spammers. I loathe and
> despise them. I hate njabl even more.

Agreed. My spam is _my_ problem and fixing it should not include making
it everyone else's problem. Forget whether its legal, its pretty
inconsiderate as many environments flag this stuff as malicious so it
triggers alerts.

>  The last time I called their ISP to
> complain, I was assured that I must have done something to deserve the
> aggressive testing.

As a follow up, it also looks like they did a pretty aggressive port
scan of my system. Not sure how checking Telnet, X-Windows or RADIUS
will tell them if I'm a spammer, but whatever.

>  Well, nope, I didn't, and I don't. They just did it
> again, and by "it", I mean that they hit every machine in my little
> netblock

I've tweaked my perimeter to return host-unreachables to all packets
originating from their network (rate limited of course). If that stops
them from accepting my mail, oh well, I'll survive.
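The tweak is roughly the following (the /24 and the rate are placeholders I've substituted in, not my exact rules):

```shell
# Sketch: answer everything from the scanner's netblock with
# ICMP host-unreachable, rate-limited so we don't become an
# amplifier ourselves. 209.208.0.0/24 is a placeholder block.
iptables -A INPUT -s 209.208.0.0/24 -m limit --limit 5/second \
  -j REJECT --reject-with icmp-host-unreachable

# Anything over the rate limit falls through to a silent drop.
iptables -A INPUT -s 209.208.0.0/24 -j DROP
```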

Thanks for the confirmation,
C




Re: Minimum Internet MTU

2003-12-22 Thread Chris Brenton

On Mon, 2003-12-22 at 09:36, Robert E. Seastrom wrote:
>
> You mean like everyone who's still running TCP/IP over AX.25 in the
> ham radio community? 

I actually thought of this, but only as an end-point which would not
generate fragmented packets. I didn't consider that people could be
using Linux or whatever to hide an Ethernet network behind the link,
which of course would fragment the stream.

Looks like I need to drop my threshold to < 500. This is exactly what I
needed, thanks!

> What are you trying to accomplish by killing off the fragments?

My experience has been that attackers still like to use fragmentation as
a method of covering their tracks. No they do not do it all the time,
but I've noticed that a lot of the time when I've been able to catch
0-day stuff it's fragmented in order to help stealth it.

So what I'm looking for is a definable limit to be able to say "a
non-last fragment below this size is very likely to be hostile and
should be handled accordingly". Running with less than 500 bytes is
still cool, as the stuff I've found is always less than 100 bytes. I'm
just looking to add as much "slop" as possible to catch what I have not
thought of without triggering false positives.

So unless someone knows of a case below 500 bytes, I think I'm all set.
Thanks for the great feedback.

C




Extreme spam testing

2003-12-22 Thread Chris Brenton

Greets again all,

I noticed something kind of interesting when I made my last post to
NANOG. I can understand people wanting to do spam checking, but IMHO
this is a bit excessive and inconsiderate. 

I'm guessing njabl.org is doing this to everyone who posts to the list,
so I thought others might want to know about it in case they have not
noticed it in their own logs. BTW, if you are curious about the
"spammers_waste_oxygen" portion, that was grabbed off my SMTP banner.

Cheers,
C

***

Dec 22 08:21:50 mailgate sendmail[492]: hBMDLnHS000492:
before-reporting-as-abuse-please-see-www.njabl.org [209.208.0.15] did
not issue MAIL/EXPN/VRFY/ETRN during connection to MTA
Dec 22 08:21:50 mailgate sendmail[495]: hBMDLoHS000495:
ruleset=check_rcpt, arg1=<[EMAIL PROTECTED]>, relay=rt.njabl.org
[209.208.0.15], reject=550 5.7.1 <[EMAIL PROTECTED]>... Relaying
denied
Dec 22 08:21:50 mailgate sendmail[495]: hBMDLoHT000495:
ruleset=check_mail, arg1=<[EMAIL PROTECTED];>,
relay=rt.njabl.org [209.208.0.15], reject=553 5.1.8
<[EMAIL PROTECTED];>... Domain of sender address
[EMAIL PROTECTED] does not exist
Dec 22 08:21:50 mailgate sendmail[495]: hBMDLoHU000495:
ruleset=check_mail,
arg1=<"[EMAIL PROTECTED]"@spammers_waste_oxygen;>,
relay=rt.njabl.org [209.208.0.15], reject=553 5.1.8
<"[EMAIL PROTECTED]"@spammers_waste_oxygen;>... Domain of
sender address [EMAIL PROTECTED]@spammers_waste_oxygen does not
exist
Dec 22 08:21:51 mailgate sendmail[495]: hBMDLoHV000495:
ruleset=check_mail, arg1=, relay=rt.njabl.org
[209.208.0.15], reject=553 5.5.4 ... Domain name required
for sender address relaytestsend
Dec 22 08:21:51 mailgate sendmail[495]: hBMDLoHW000495:
ruleset=check_mail, arg1=<[EMAIL PROTECTED]>, relay=rt.njabl.org
[209.208.0.15], reject=553 5.5.4 <[EMAIL PROTECTED]>... Real
domain name required for sender address
Dec 22 08:21:51 mailgate sendmail[495]: hBMDLoHX000495:
ruleset=check_rcpt, arg1=<[EMAIL PROTECTED]>, relay=rt.njabl.org
[209.208.0.15], reject=550 5.7.1 <[EMAIL PROTECTED]>... Relaying
denied
Dec 22 08:21:51 mailgate sendmail[495]: hBMDLoHY000495:
ruleset=check_rcpt, arg1=<[EMAIL PROTECTED]>, relay=rt.njabl.org
[209.208.0.15], reject=550 5.7.1 <[EMAIL PROTECTED]>... Relaying
denied
Dec 22 08:21:51 mailgate sendmail[495]: hBMDLoHZ000495:
ruleset=check_rcpt, arg1=<[EMAIL PROTECTED]>, relay=rt.njabl.org
[209.208.0.15], reject=550 5.7.1 <[EMAIL PROTECTED]>... Relaying
denied
Dec 22 08:21:52 mailgate sendmail[495]: hBMDLoHa000495:
ruleset=check_rcpt, arg1=<[EMAIL PROTECTED]>, relay=rt.njabl.org
[209.208.0.15], reject=550 5.7.1 <[EMAIL PROTECTED]>... Relaying
denied
Dec 22 08:21:52 mailgate sendmail[495]: hBMDLoHb000495:
ruleset=check_rcpt, arg1=<[EMAIL PROTECTED]>, relay=rt.njabl.org
[209.208.0.15], reject=550 5.7.1 <[EMAIL PROTECTED]>... Relaying
denied
Dec 22 08:21:52 mailgate sendmail[495]: hBMDLoHc000495:
ruleset=check_rcpt, arg1=<[EMAIL PROTECTED]>, relay=rt.njabl.org
[209.208.0.15], reject=550 5.7.1 <[EMAIL PROTECTED]>... Relaying
denied
Dec 22 08:21:52 mailgate sendmail[495]: hBMDLoHd000495:
ruleset=check_rcpt, arg1=<[EMAIL PROTECTED]>, relay=rt.njabl.org
[209.208.0.15], reject=550 5.7.1 <[EMAIL PROTECTED]>... Relaying
denied
Dec 22 08:21:52 mailgate sendmail[495]: hBMDLoHe000495:
ruleset=check_mail, arg1=<[EMAIL PROTECTED];>,
relay=rt.njabl.org [209.208.0.15], reject=553 5.1.8
<[EMAIL PROTECTED];>... Domain of sender address
[EMAIL PROTECTED] does not exist
Dec 22 08:21:53 mailgate sendmail[495]: hBMDLoHf000495:
ruleset=check_rcpt,
arg1=<[EMAIL PROTECTED];>, relay=rt.njabl.org
[209.208.0.15], reject=550 5.7.1
<[EMAIL PROTECTED];>... Relaying denied
Dec 22 08:21:53 mailgate sendmail[495]: hBMDLoHh000495:
ruleset=check_mail, arg1=<[EMAIL PROTECTED];>,
relay=rt.njabl.org [209.208.0.15], reject=553 5.1.8
<[EMAIL PROTECTED];>... Domain of sender address
[EMAIL PROTECTED] does not exist




Re: Minimum Internet MTU

2003-12-22 Thread Chris Brenton

On Mon, 2003-12-22 at 08:27, bill wrote:
>
> > Is it safe to assume
> > that 99.9% of the Internet is running on 1500 MTU or higher these days? 
> 
>   define safe. 


I agree, this is a bit of a loaded question. I guess by safe I mean "Is
anyone aware of a specific link or set of conditions that could cause
_legitimate_ non-last fragmented packets on the wire that have a size of
less than 1200 bytes". I agree there are bound to be inexperienced users
who have shot themselves in the foot and tweaked their personal system
lower than this threshold, thus my 99.9% requirement.

I had a couple of people e-mail me about Cisco's Pre-fragmentation
feature for IPSec. If I understand it correctly (someone please correct
me if I'm wrong), it's the original datagrams that get fragmented. Thus
it's the encapsulated payload that will have MF set, not the actual IPSec
packet seen on the wire. With this in mind, the exposed IP header would
just show it to be a small packet, not a small fragment. Am I off here?

>   now that you mention it...  :)
>   btw, what will your IDS/firewall do when presented w/ a 9k mtu?

Depends on the setup. I've actually been running this as a set of IDS
rules for a few years and have detected a few 0-day events this way. I
have not hit any false positives that I'm aware of, but then again we're
only talking my small view of the Internet. Thus my question to the
group. If anyone is going to know the answer its this crew. :)

I'm looking to move the rules into the firewall/IPS realm, but want to
be sure before I do as now we are talking blocking the traffic rather
than just recording it. First implementation would be a set of iptables
rules, with pf shortly after. I have not seen any commercial firewalls
with this type of capability, but I have not had a chance to focus on
this aspect too deeply as of yet. Checkpoint has possibilities, but
implementation would probably be beyond the typical point and click
admin.
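As a starting point, the iptables version might look something like the following. The u32 expression and the 1200-byte cutoff are my sketch of the idea, not tested production rules:

```shell
# Sketch: flag and drop suspiciously small non-last fragments.
# u32 loads bytes 4-7 of the IP header; bit 0x2000 of the low
# 16 bits is the More Fragments (MF) flag. Combined with a total
# packet length under 1200 bytes, this catches tiny non-terminal
# fragments. The 1200-byte threshold is the value under discussion.
iptables -A FORWARD -m u32 --u32 "4&0x2000=0x2000" \
  -m length --length 0:1199 -j LOG --log-prefix "TINY-FRAG: "
iptables -A FORWARD -m u32 --u32 "4&0x2000=0x2000" \
  -m length --length 0:1199 -j DROP
```

Keeping the LOG rule ahead of the DROP gives you the IDS-style record while you build confidence that the threshold doesn't hit legitimate traffic.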

Thanks for all the great feedback!
C




Minimum Internet MTU

2003-12-22 Thread Chris Brenton

Greetings all,

I'm working with a few folks on firewall and IDS rules that will flag
suspicious fragmented traffic. I know the legal minimum of a
non-terminal fragment is 28 bytes, but given non-terminals should
reflect the MTU of the topologies along the link, this number is far
lower than what I expect you should see for legitimate fragmentation in
the wild.

A few years back I noted some 512-536 MTU links in Asia. I've been doing
some testing and can't seem to find them anymore. Is it safe to assume
that 99.9% of the Internet is running on 1500 MTU or higher these days? 

I know some people artificially set their end point MTU a bit lower
(like 1400) to deal with things like having their traffic encapsulated
by GRE or IPSec. With this in mind, would we be safe to flag/drop/what
ever all fragments smaller than 1200 bytes that are not last fragments
(i.e., more fragments is still set)? Does anyone maintain, or is aware,
of links that would not meet this 1200 MTU?

Any and all feedback would be greatly appreciated,
C




Re: Firewall stateful handling of ICMP packets

2003-12-04 Thread Chris Brenton

On Wed, 2003-12-03 at 22:09, Jamie Reid wrote:
> 
> This was a problem when filtering Nachi while it pinged networks
> to their knees. 

I think the problem was exacerbated by the fact that some ISPs
responded by blocking _all_ ICMP. It's bad enough that this killed their
own ability to see if their hardware was up or down; it also amplified
traffic as ICMP errors were no longer returned (due to retransmits and
now being prime address space for spoofing).

> Sometimes I wonder if there is any legitimate reason to allow 
> pings from users at all.

This all comes down to the SLA. For home users, you can probably get
away with it. For business level connections, "not knowing" and killing
the service can have financial repercussions.

Of course we're talking about addressing a symptom, not a problem. The
"problem" is not ICMP Type 8's, the problem is systems that are
unprotected and users that can't figure out when the box has been
whacked. Personally, I was bummed that my all Linux/BSD network could
not use Type 8's because my upstream was filtering them due to Windows
boxes getting whacked with Nachi.

A couple of other people mentioned rate limiting. That is probably the
best option. Of course supporting it can drive up hardware costs.
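A minimal rate-limit sketch with iptables, for illustration (the thresholds are arbitrary examples, not a recommendation):

```shell
# Sketch: let a trickle of echo-requests through, drop the flood.
# 10/second with a burst of 20 is an arbitrary example threshold.
iptables -A FORWARD -p icmp --icmp-type echo-request \
  -m limit --limit 10/second --limit-burst 20 -j ACCEPT
iptables -A FORWARD -p icmp --icmp-type echo-request -j DROP
```

This keeps ping usable for legitimate troubleshooting while capping what a Nachi-style worm can push through the link.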

> If the user really needed to use
> ping, that is, if they were in a position to do anything about the
> results of the ping tests, then they would know enough to 
> use traceroute in UDP mode or some other tool. 

Could be UDP is blocked while type 8's are not. Could be they are on a
Windows box which uses type 8's for tracing rather than UDP. 

> There are lots of other useful ICMP types to handle all
> the other ICMP needs, but ping seems to be something
> that was created for the convenience of a kind of user
> that is effectively extinct in todays Internet.  

There are a *ton* of companies out there that monitor system up status
via Type 8's over the Internet. I'm not saying it's a good idea or that
there are not other options. Just that it would break a ton of business
models if it goes away.

> ICMP echo is unique among ICMP types in that it is the
> only one that elicits it's own response.

What about subnet mask request? time stamp request? Information request?
There are probably others as well.

> There is nothing that echos
> do that SNMP (I know, I know) and traceroute don't
> accomplish in a more controlled fashion, no? 

EEEK! SNMP opens up a point of accessing code running on the device. As
for traceroute, if all I'm interested in is the endpoint, I've generated
a ton of unnecessarily traffic. Given an average 15 hop distance between
Internet hosts, that would be 90 traceroute packets to do the job, Vs.
Ping only needing 2. Sure I can tweak the start and stop hop count
(actually Windows does not let you set the min starting hop) to drop
this quantity, but how many users are going to bother?
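The arithmetic behind those numbers, spelled out (assuming the usual default of 3 probes per hop):

```shell
# Traceroute to a host 15 hops away: 3 probes per TTL, each probe
# drawing one ICMP response back, vs. ping's single request/reply.
hops=15
probes_per_hop=3
traceroute_pkts=$((hops * probes_per_hop * 2))   # probes + responses
ping_pkts=2                                      # echo request + echo reply
echo "traceroute: ${traceroute_pkts} packets, ping: ${ping_pkts}"
```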

> It would kill alot of DDoS attacks and render their zombie 
> networks useless,

I seem to remember we said the same thing about killing Smurf amplifier
networks. The black hats just changed tactics and started whacking a ton
of hosts. Killing Type 8's will not cure "the problem", as the problem
is totally capable of mutating into something that will still be
effective (like SYN flooding).  

HTH,
C




Re: Server mirroring

2003-11-28 Thread Chris Brenton

On Thu, 2003-11-27 at 23:57, Stephen Miller wrote:
> check out the following link for info on rsync:
> 
> http://samba.anu.edu.au/rsync/

Bill Stearns has some *excellent* information on combining rsync with
SSH & public/private keys if you need to backup the data in a secure
fashion. 

http://www.stearns.org/rsync-backup/
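A minimal rsync-over-SSH invocation in that spirit looks roughly like this. The paths, key file, user, and host are all placeholders of mine, and this is not Bill's actual script:

```shell
# Sketch: mirror /srv/data to a backup host over SSH using a
# dedicated key. Every name here is a placeholder, not Bill
# Stearns' setup. --delete makes the mirror a true copy.
rsync -avz --delete \
  -e "ssh -i /root/.ssh/backup_key" \
  /srv/data/ backupuser@backuphost:/backups/data/
```

Pairing the dedicated key with a forced command in authorized_keys on the backup host is what keeps the automated login from being a general-purpose shell.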

If all you need is straight rsync, he has a script for that too:

http://www.stearns.org/rsync-mirror/

HTH,
C




Re: Open source traffic shaper experiences? (was Re: looking for a review of traffic shapers)

2003-11-25 Thread Chris Brenton

On Tue, 2003-11-25 at 12:38, [EMAIL PROTECTED] wrote:
> 
> Is anyone on the NANOG list aware of a disk-less Linux solution? One might
> imagine a Knoppix-like bootable CD image (perhaps CD-RW, so config files
> could be updated) that would turn an inexpensive Linux box into an
> effective traffic shaping device

Sounds like you are looking for LARTC (Linux Advanced Routing & Traffic Control):
http://www.lartc.org/

I would expect you could setup your own CD image if that is part of your
need. 

HTH,
C




Re: AOL fixing Microsoft default settings

2003-10-24 Thread Chris Brenton

On Fri, 2003-10-24 at 00:22, Jared Mauch wrote:
> On Fri, Oct 24, 2003 at 12:13:59AM -0400, Sean Donelan wrote:
> > http://www.securityfocus.com/news/7278
> > 
> > How many other ISPs intend to follow AOL's practice and use their
> > connection support software to fix the defaults on their customer's
> > Windows computers?
> 
>   Sounds good to me.  The potential for these users
> to be less-than-educated enough about the existance of
> this "feature" means that the potential for this to
> increase the overall network security is a good thing.

Does anyone know anything about what security has been put in place for
this? These quotes troubled me:

"So two weeks ago, AOL began turning the feature off on customers'
behalf, using a self-updating mechanism in AOL's software."

"Users are not notified of the change..."

Is this "mechanism" an SSL connection? HTTP in the clear? AIM? Is it
exploitable?

I think the intention is admirable, but it has the potential to be a
real nightmare if implemented incorrectly. The fact that it can all
happen without the knowledge of the end user means even a savvy user
could get whacked if the underlying structure is insecure.

C








Re: Fw: Re: Block all servers?

2003-10-15 Thread Chris Brenton

On Tue, 2003-10-14 at 21:12, Fred Heutte wrote:
>
>   IPSec prevents packet modification to thwart man-in-the-middle
>   attacks. However, this strong security feature also generates
>   operational problems. NAT frequently breaks IPSec because it
>   modifies packets by substituting public IP addresses for
>   private ones. Many IPSec products implement NAT traversal
>   extensions, but support for this feature isn't universal, and
>   interoperability is still an issue.

IMHO this is a bit misleading as it implies you need some kind of
special gateway with "NAT traversal extensions" to get IPSec to work.
This is not exactly true as only AH checks the IP header. If you stick
with just ESP you can re-write IPs without failing authentication.

True, this only works for one-to-one NAT. Many-to-one NAT will still
break IPSec, even if ESP is used alone. This is a functionality issue,
however (IKE uses a fixed UDP port of 500, and ESP itself carries no
ports for PAT to track), rather than a "preventing packet modification
to thwart man-in-the-middle attacks" thing.

> And Phifer notes later that one of the critical issues with SSL 
> VPNs is whether you want to "Webify" everything.  For all
> of us (I hope), the net is much more than just port 80.

Not so sure you really have to. This is true if you are running things
like pop3s, imaps, etc. but you can also go with something like stunnel
which is pretty close to IPSec. The biggest drawback is no native
support for UDP which makes using internal DNS a bit of a bear.
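For example, wrapping a plaintext service with stunnel is roughly a stanza on each end like the following (the service name, ports, and hostname are illustrative, not a complete config):

```ini
; Server side (stunnel.conf): accept TLS on 993, hand the cleartext
; off to the local plaintext IMAP daemon on 143. Names are examples.
[imap-wrap]
accept  = 993
connect = 127.0.0.1:143

; Client side (its own stunnel.conf): listen locally in the clear,
; tunnel out over TLS. "client = yes" belongs in the global section.
;   client = yes
;   [imap-wrap]
;   accept  = 127.0.0.1:143
;   connect = server.example.com:993
```

Since stunnel only proxies TCP streams, anything UDP-based (like the DNS case above) has to be converted or tunneled some other way.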

Cheers,
C