On Mon, 24 Nov 2008, [EMAIL PROTECTED] wrote:
> On Sat, 22 Nov 2008, Lamont Granquist wrote:
>> On Wed, 19 Nov 2008, [EMAIL PROTECTED] wrote:
>> If you've got a million different zones, try to reduce it down as much as 
>> possible.  For the average website I'd suggest simply having four zones 
>> (internet, production, corporate and credit-cards/PCI).  Even having 
>> separate zones between databases and app servers isn't scalable, and 
>> ultimately doesn't offer any security (as an attacker, all I have to do is 
>> find one exploitable hole in a firewall with hundreds of holes in it).
>
> you are assuming a single website. when there are many (potentially 
> thousands) of different websites, and several different product lines, there 
> are reasons to not lump them all together where a flaw in one exposes all 
> the data for all the others.

If all those websites are internal, I would still suggest not breaking 
them up.  Since 2001 I've worked at two large sites (30,000 servers and 
2,000 servers), both with multiple SOA-oriented websites and different 
lines of business with different web presences.  The 30,000-server site 
didn't attempt any internal zoning (or rather, it attempted internal 
zoning a couple of times and failed miserably before it got to a large 
scale -- ask Trey about that one) and was much better run in this respect 
than the 2,000-server site, which is still chasing the idea of firewalling 
everything and running into the reality that in SOA-oriented 
architectures, when one "website" solves an internal problem it often 
winds up being used by the other internal customers, so the isolation of 
dataflow between the websites is largely a myth.

>> I'd even suggest the heretical idea of allowing all your servers to connect 
>> out to the internet.  The practice of blocking all outbound connections 
>> which have not been explicitly allowed is a prevent control which primarily 
>> _prevents_ internal business from occurring.  It would be far better to turn 
>> that into a detect control and monitor border traffic for outbound 
>> anomalies and get out of the way of internal software being able to use the 
>> Internet.
>
> If you can allocate the manpower to properly tune and respond to the 
> monitoring systems, you can allocate the manpower to open the holes 
> explicitly.
>
>> I keep getting invited to post-mortems on deployments which fail because 
>> border firewall rulesets weren't updated properly for the deployment and I 
>> keep on feeling like the doctor that tells the patient "well, if it hurts 
>> when you do that, stop doing that".
>
> in my experience there is a strong tendency to implement monitoring/alerting 
> systems with the justification that you can then open up the firewalls, but 
> then not allocate the manpower to properly respond to the alerts.
>
> there's also the problem that the firewall has a chance of preventing the 
> problem while the monitoring only tells you after the problem has occurred.

The problem isn't manpower, the problem is co-ordination.  If you allow 
pre-production to talk to the internet unhindered to make development 
quick, then you ensure that you'll just have rollout failures because the 
firewalling information is never captured.  If you lock down 
pre-production then the firewall rule that needed to get opened up 3 
months ago when the software feature started development is forgotten by 
everyone (software dev, syseng, neteng, seceng) by the time the software 
is pushed to production.  It may work on a small scale with a couple 
different websites and someone who is detail oriented tracking all that 
information, but that approach doesn't scale.  Database-driven integration 
of the firewall ruleset with the software management/deployment 
infrastructure would solve this problem, but I've never seen a site 
anywhere nearly well run enough to implement this kind of holy grail.
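To make the "holy grail" concrete, here's a minimal sketch of what that 
database-driven integration could look like: a per-service registry of 
declared egress flows that the deployment tooling turns into firewall 
rules at rollout time.  All the names here (SERVICES, emit_rules, the 
hostnames) are hypothetical illustrations, not any real tool's API.

```python
# Sketch: per-service egress flows live in one registry from the day
# development starts, so the rollout can't "forget" a rule that was
# needed three months earlier.  Names and hosts are illustrative.

SERVICES = {
    # service name -> egress flows it declared when development started
    "billing-frontend": [("payments.example.com", 443)],
    "mail-notifier":    [("smtp-relay.internal",  25)],
}

def emit_rules(service):
    """Generate iptables commands for a service's declared egress flows."""
    return [
        "iptables -A OUTPUT -d %s -p tcp --dport %d -j ACCEPT" % (host, port)
        for host, port in SERVICES[service]
    ]

for rule in emit_rules("billing-frontend"):
    print(rule)
```

The point is that deployment consumes the same record the developers 
wrote, so software dev, syseng, neteng, and seceng never have to remember 
anything across those three months.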

You can also do a lot to mix the two approaches, actually.  Generally you 
can block nearly everything outbound other than 80 or 443, which catches 
IRC and a lot of other nastiness.  The biggest problem with leaving port 
80 open is internal open proxies, which you can scan for.  Block SMTP and 
force it through a relay, then monitor the relay for anomalies.  The only 
major downside is that this doesn't block exploits which only need to make 
a port 80 connection back outbound, but that is a subset of attacks, so 
you're still doing a lot of good while not blocking the business.  If you 
add anomaly detection on the border then you can actually detect problems 
and shut them down.
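The mixed policy above boils down to a simple decision function.  A sketch 
(the relay hostname and the port list are assumptions for illustration):

```python
ALLOWED_PORTS = {80, 443}          # web egress stays open for business
SMTP_RELAY = "relay.internal"      # assumed internal relay host

def egress_allowed(dest_host, dest_port):
    """Default-deny outbound policy: web ports open, SMTP only via relay."""
    if dest_port == 25:
        # mail must go through the monitored relay, never direct
        return dest_host == SMTP_RELAY
    # everything else (IRC on 6667, etc.) is blocked by default
    return dest_port in ALLOWED_PORTS

egress_allowed("evil.example.com", 6667)   # IRC: blocked
egress_allowed("relay.internal", 25)       # mail via relay: allowed
```

Everything this function denies is prevented outright; everything it 
allows funnels through a small number of points (the relay, the border) 
where detection can be concentrated.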

You are right that the firewall is a prevent control rather than a detect 
control (as I pointed out), and that has some benefit: it can prevent 
attacks rather than just detecting them after the fact.  But at a site 
with no detect-control resources you simply have lots of prevent controls 
which prevent business, with no ability to detect intrusions -- so if 
something does get past all your prevent controls, you won't catch it at 
all.  If you do have detect resources, I strongly suggest that the first 
place to spend them well is border IDS and anomaly detection.
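As a toy illustration of what border anomaly detection can mean in 
practice, here's a crude sketch that flags internal hosts talking to an 
unusual number of distinct outbound destinations.  The baseline value and 
flow format are assumptions; a real deployment would feed this from 
border flow logs and tune the threshold.

```python
from collections import defaultdict

def flag_anomalies(flows, baseline=100):
    """Flag internal hosts whose count of distinct outbound destinations
    exceeds a per-host baseline -- a crude border anomaly detector.

    flows: iterable of (internal_host, external_dest) pairs.
    """
    dests = defaultdict(set)
    for src, dst in flows:
        dests[src].add(dst)
    return sorted(h for h, d in dests.items() if len(d) > baseline)

# a host suddenly talking to 150 distinct destinations gets flagged;
# a host talking to one destination does not
flows = [("web01", "dst%d" % i) for i in range(150)] + [("web02", "dst0")]
print(flag_anomalies(flows))   # ['web01']
```

This is the detect-control counterpart to the prevent controls above: it 
catches the compromise that slipped past the firewall, instead of silently 
missing it.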
_______________________________________________
Discuss mailing list
[email protected]
http://lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
 http://lopsa.org/
