> > > very few do content and protocol parsing, and even those
> > > are limited based on the designers' knowledge of attacks in the
> > > protocols that are being analysed and proxied.
> >
> >Actually, I am shocked! :/ When you say very few, does that include the proxies? 
> 
> Especially the proxies.

Too bad :( So the well-known packet filter vs. proxy discussion that takes place on 
this ML from time to time, about which one is the more secure, is really quite 
hypothetical. From what I've seen, this discussion tends to end with the conclusion that 
a proxy, since it "understands the protocol", is the most secure. It would really be 
fairer to say something like: a proxy is potentially more secure, if implemented 
correctly (i.e. it does check for known bugs, buffer overflows, etc.). 
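
By "implemented correctly" I mean at least the cheap, generic checks. For instance, 
refusing protocol commands that are suspiciously long, since that is how a lot of 
buffer overflow exploits announce themselves. A rough sketch of the idea (Python just 
for illustration; the 512-byte limit is my own assumption, roughly the line limit the 
old SMTP/FTP RFCs suggest -- not taken from any particular product):

    # Generic sanity check a proxy could apply to any line-oriented protocol
    # (SMTP, FTP, ...): refuse command lines long enough to smell like a
    # buffer overflow attempt. The exact limit is an assumption.
    MAX_COMMAND_LENGTH = 512

    def command_looks_hostile(line: bytes) -> bool:
        """Flag command lines that exceed the protocol's expected length."""
        return len(line) > MAX_COMMAND_LENGTH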

> >What's the point with a proxy then? Has it become so that people write proxies 
> >simply as a means for certain traffic to travel across a dual-homed host with IP 
> >forwarding disabled (with no thought to security; no effort at blocking buffer 
> >overflows, known bugs, etc. at all)?
> 
> As far as I can tell, most proxies are not much better than just
> having an IP access list in a router. The first generation of
[...]

No, if proxies have become, as you say, a means of "just getting the data back and 
forth", then maybe the opposite is the case.

> Most of the current generation of proxies are written to "just
> get the data back and forth" and never mind doing security
> processing. For example, a "smart" web proxy would have to collect
> the whole document/data stream, look at it, and then decide
> whether or not to send it in/out. That breaks web streaming. The
> customers scream so the checks are removed. The firewall toolkit
> (and by extension early versions of Gauntlet) looked for about
> 4 well-known attacks against sendmail in the mail proxy, and
> FTP bouncing in FTP command streams. That was _it_.
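
Just to make that FTP bouncing check concrete: as far as I understand it, it boils 
down to refusing a PORT command that advertises an address other than the client's 
own. A rough sketch of the idea (hypothetical code, not FWTK's actual implementation; 
Python just for illustration):

    def port_command_is_bounce(command: str, client_ip: str) -> bool:
        """Return True if a PORT command advertises an address other than the
        client that issued it -- the classic FTP bounce signature."""
        if not command.upper().startswith("PORT "):
            return False            # not a PORT command, nothing to check
        try:
            h1, h2, h3, h4, p_hi, p_lo = (int(x) for x in command[5:].split(","))
        except ValueError:
            return True             # malformed argument: safest to refuse it
        return f"{h1}.{h2}.{h3}.{h4}" != client_ip

    # A client at 10.0.0.5 trying to make the server connect to 192.168.1.9:
    assert port_command_is_bounce("PORT 192,168,1,9,0,25", "10.0.0.5")
    # The same client advertising its own address is fine:
    assert not port_command_is_bounce("PORT 10,0,0,5,14,178", "10.0.0.5")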

IMHO, filtering out some attacks is better than none. I do not like the idea of 
proxies basically being just a "traffic filter" :/  I really wish someone would make 
something like FWTK again: open source (for Linux & *BSD), developed by people who 
keep themselves updated on the latest exploits and make sure their proxies block them 
(sure, that would be too late for many, but it would help lots more). And with the 
focus on security, NOT performance.

After writing this I discovered that maybe SuSE will. I got this as part of a posting 
on the SuSE security ML:

"SuSE FTP Proxy - The first program of the SuSE Proxy Suite.
                 A secure FTP proxy with support for SSL, LDAP, command
                 restriction, active and passive FTP support, and much more.
                 RPM: fwproxy.rpm, fwproxys.rpm (SSL - not in the US version)"

Notice that they say "Proxy Suite", which I think is promising. Let's hope that they're 
doing a good job (i.e. making security tools, not just "traffic forwarders"). If anyone 
has any comments on SuSE's implementation, I'd love to hear them.

> Nowadays firewall makers are rewarded for hauling data back
> and forth at peak bandwidth, not for performing security checks.
> As a consequence, few of them do. I don't think it'd make any
> difference because nowadays the application protocols are not
> public information. Making checks in an FTP proxy was possible
> because FTP was a well-known protocol. What about netmeeting?
> Or ICQ? Or some other half-assed new application protocol thrown
> together last night by the startup down the street? The proxies
> just pass the data because nobody understands it anyhow and the
> vendors are free to change it from release to release.
 
It's sad that it's the firewall makers, and whoever implements the security policy, 
who have to take the heat from the users, and not the makers of the protocol :/ 
Protocols should either be open, or at the very least be given to FW developers under 
NDA. Ideally, protocol designers would consult someone who understands the security 
implications (FW developers?) of a protocol first.

> >Also in your debate on FW's (obsolete or not), you state: "Some firewalls perform 
> >application specific security on data streams. -Others do not. -Sometimes you 
> >can't." What do you mean by the last one ("... you can't")? Why not? :)
> 
> SSL, for example. Even if you had a proxy that "understood"
> buffer overruns in HTTP, what about buffer overruns triggered
> over SSL? Mostly, inside the web server, the accesses wind up
> going down the same code-path once they've gotten pulled off
> the HTTP or SSL transports.

Ah, of course :)

> >It really seems like many computer security professionals don't understand the 
> >incoming traffic problem either :/
> 
> Nope. :( So far we've been spared the next one, which is the
> "outgoing traffic problem" -- in which the bad guys realize
> that 99.7% of the firewalls out there are transparently
> permeable from the inside going toward the outside. Which means
> that a "firewall buster" trojan horse that knows how to tunnel
> out through a firewall (usually by just making a connection on
> port 80) will be able to easily make the firewall a moot issue.
> Imagine if someone wired a firewall buster into a virus like
> Melissa. How would network admins react? I know of no palatable
> solutions to this problem.

Scary! Though I am surprised that we haven't seen this attack already. Well, we have; 
just not the combo, right? Also, this is in fact closely related to the incoming 
traffic problem: the trojan/virus will first have to get inside the FW.
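
To make the point concrete: as far as I can tell, all such a trojan has to do is 
exactly what a web browser does, so a packet filter watching the outbound side has 
nothing to go on. A rough illustration (hypothetical host name, sketch only):

    import socket

    # An ordinary outbound TCP connection to port 80 -- indistinguishable, at
    # the packet level, from a user browsing the web.
    conn = socket.create_connection(("www.example.com", 80), timeout=10)
    conn.sendall(b"GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n")
    print(conn.recv(200))   # whatever comes back, the connection got out
    conn.close()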

Regards,

Per



