On Wed, 7 Jun 2000, Alberto Begliomini wrote:

> The topic of gigabit firewalls has been discussed ad nauseam on this 
> mailing list; I still wonder, however, what kind of firewall large 
> sites like Yahoo, or Ebay, or E-Trade, just to name a few, are using.

Most large "sites" are actually a collection of servers at multiple sites,
routing scale points dictated this architecture initially, and downtime
scale points probably make it still a good architecture today.  If you
channel all your traffic to a single gigabit interface in a single server
and it goes down, you're out of business- that's unacceptable to most
large sites, so colocation to points with servers off of 100Mb/s
FastEthernet is the norm for such sites.

Also, most large sites have customer bases that span at least the US, if
not the world, and colocating servers at facilities topologically close to
major peering points in different geographic regions provides
significantly better response to customers in that region, or to customers
topologically connected to the long-haul lines going into that facility.
Decreased per-system load from distributing the traffic also improves
response for all customers globally.
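
Rough numbers, just to illustrate the distribution point (the figures
below are made up for the example, not from any real site):

# Back-of-envelope sketch: spreading aggregate traffic across colos and
# servers keeps per-server load far below a FastEthernet link's capacity.
# All figures here are invented for illustration.

TOTAL_TRAFFIC_MBPS = 900        # hypothetical aggregate site traffic
COLOS = 3                       # hypothetical number of colo facilities
SERVERS_PER_COLO = 20           # hypothetical servers at each colo
FAST_ETHERNET_MBPS = 100        # link speed per server

per_colo = TOTAL_TRAFFIC_MBPS / COLOS
per_server = per_colo / SERVERS_PER_COLO

print(f"Per-colo traffic:   {per_colo:.0f} Mb/s")
print(f"Per-server traffic: {per_server:.1f} Mb/s "
      f"({100 * per_server / FAST_ETHERNET_MBPS:.0f}% of a FastEthernet link)")

The point being that once the traffic is spread geographically, nobody
needs a single gigabit pipe into a single box.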

> My guess is probably they just use access lists on their border
> routers and they harden their servers. I don't think that a firewall
> which can sustain 1 Gbs traffic exists yet. Comments?

(A) They don't need to sustain that much traffic, as they most likely
don't aggregate that much traffic to a single server, or even to a single
group of servers.  (I'm personally unaware of anyone who's doing that, but
it's possible that a large hosting provider like IBM could be, since they
probably own their entire infrastructure- unlike most large sites that
colo at facilities where standard non-GigE equipment is the norm for the
hosting facility itself, though that will change over time.)  IP stack
performance problems on servers serving multitudes of customers come well
before link saturation.  If you put too many machines on a segment, you
also start running into problems with ARP traffic affecting your
performance (I've personally seen BigIP have trouble with a
questionably-planned architecture that had >10,000 IP addresses on a
single segment- though adding a second BigIP at least made the network
usable.)
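
To see why a huge flat segment hurts, here's a crude estimate of the ARP
broadcast load (the peer count and cache lifetime are assumptions for
illustration only):

# Crude estimate of ARP broadcast load on a single flat segment.
# Every ARP request is a broadcast, so every host on the segment has to
# receive and process it.  Numbers are assumptions for illustration.

HOSTS = 10_000                  # addresses on the segment, as in the example above
PEERS_PER_HOST = 5              # hypothetical: distinct peers each host talks to
ARP_CACHE_LIFETIME_S = 300      # hypothetical: seconds before an entry is re-resolved

# Each host re-ARPs each of its peers roughly once per cache lifetime.
requests_per_sec = HOSTS * PEERS_PER_HOST / ARP_CACHE_LIFETIME_S
deliveries_per_sec = requests_per_sec * HOSTS   # every host sees every broadcast

print(f"~{requests_per_sec:.0f} ARP broadcasts/sec on the wire")
print(f"~{deliveries_per_sec:,.0f} packet receptions/sec across the segment")

And that's steady state- a failover or a cache flush that re-ARPs
everything at once is far worse.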

(B) Filtering at the border and hardening the hosts is appropriate for
Web servers which must be reachable by the public anyway.  It requires a
great deal of discipline and a highly clueful administration staff to do
well, and it's a completely different environment than most "corporate"
networks.
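
A minimal sketch of what "filtering at the border" amounts to for a
public Web farm: permit only the services the public has to reach and
drop everything else.  This is an illustrative Python model of the
policy, not the syntax of any particular router or firewall, and the
addresses and ports are invented:

from ipaddress import ip_address, ip_network

# Hypothetical public Web-server block and the only services exposed.
WEB_SERVERS = ip_network("192.0.2.0/24")     # illustrative (TEST-NET) prefix
ALLOWED = {("tcp", 80), ("tcp", 443)}        # HTTP and HTTPS only

def border_permits(proto: str, dst_ip: str, dst_port: int) -> bool:
    """Return True if the border filter would pass this inbound packet."""
    if ip_address(dst_ip) not in WEB_SERVERS:
        return False                          # nothing else reachable from outside
    return (proto, dst_port) in ALLOWED       # implicit "deny all" at the end

print(border_permits("tcp", "192.0.2.10", 80))    # True  - public Web traffic
print(border_permits("tcp", "192.0.2.10", 23))    # False - telnet to a Web server
print(border_permits("udp", "192.0.2.10", 53))    # False - not an exposed service

Everything else- keeping up with patches, turning off services, watching
the logs- is the host-hardening discipline part.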

Very few large sites I know of put even packet-filtering firewalls in
front of their BigIP boxes.  Most that I'm familiar with firewall their
database servers off onto private networks "behind" their Web servers, or
reach them over leased lines from the colos where the DB architecture
isn't or can't be scaled down to one or two sites.
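
The database side of that, sketched the same way: the DB servers sit on a
private network and the only thing allowed to reach them is the Web tier,
on the database port.  Again an illustrative model with made-up addresses
and port, not any vendor's configuration:

from ipaddress import ip_address, ip_network

# Hypothetical tiers: public Web servers in front, database servers on a
# private (RFC 1918) network behind them.
WEB_TIER = ip_network("192.0.2.0/24")          # illustrative public prefix
DB_TIER  = ip_network("10.10.1.0/24")          # illustrative private prefix
DB_PORT  = 1521                                # hypothetical DB listener port

def db_firewall_permits(src_ip: str, dst_ip: str, dst_port: int) -> bool:
    """Only the Web tier may talk to the DB tier, and only on the DB port."""
    if ip_address(dst_ip) not in DB_TIER:
        return True                            # this rule set only guards the DB segment
    return ip_address(src_ip) in WEB_TIER and dst_port == DB_PORT

print(db_firewall_permits("192.0.2.10", "10.10.1.5", 1521))    # True  - Web -> DB
print(db_firewall_permits("198.51.100.7", "10.10.1.5", 1521))  # False - outside world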

Paul
-----------------------------------------------------------------------------
Paul D. Robertson      "My statements in this message are personal opinions
[EMAIL PROTECTED]      which may have no basis whatsoever in fact."
