Comments from another perspective on the must/should question:

Best practice says to physically segregate networks by trust level and by 
impact of error or breach.

Somewhat self-evidently, this is to mitigate the impact of a) errors, and 
b) security breaches.  Of the two, errors (i.e. human errors) are by far 
the more common problem.

If you have a separate NIC for each network coming into your firewall, 
and the cables, the ports, and the far ends of those cables are all 
clearly identified, it's much harder to accidentally expose high-trust 
traffic to a low-trust network.
Specifically, it's far likelier that someone will notice that the cable 
they're holding has an "AT&T" tag on it but the port they're about to plug 
it into has a "PacBell" label over it.

When you use a switch and VLANs to segregate traffic, you have to worry 
about things like: what happens if, in a pathological power situation 
(lightning strike, UPS blows up, whatever), the switch suddenly resets 
to factory defaults?  I've seen this happen.  Every port reverts to 
VLAN 1 with no filtering, and all your traffic is suddenly propagated 
to every network segment.
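
To make that failure mode concrete, here's a minimal Python/scapy 
sketch (the MAC addresses, VLAN IDs and destinations are invented for 
illustration): the only thing separating "trusted" from "untrusted" 
traffic on a shared switch is one 12-bit tag in each frame, and a 
factory-defaulted switch simply stops honouring it.

    from scapy.all import Ether, Dot1Q, IP, ICMP

    # Hypothetical frames: identical except for the 802.1Q VLAN tag.
    trusted = (Ether(src="00:11:22:33:44:55") / Dot1Q(vlan=10)
               / IP(dst="10.0.10.1") / ICMP())
    untrusted = (Ether(src="66:77:88:99:aa:bb") / Dot1Q(vlan=20)
                 / IP(dst="10.0.20.1") / ICMP())

    # All that keeps these two apart is the 12-bit VLAN ID below.  A
    # switch reset to factory defaults (every port untagged on VLAN 1,
    # no filtering) no longer enforces that boundary and floods both
    # frames out every port.
    print(trusted[Dot1Q].vlan, untrusted[Dot1Q].vlan)   # -> 10 20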

Maybe you're thinking "big deal", but now consider the fairly-typical WAN 
situation where you're running routing protocols across WAN links, say 
RIPv2 without authentication (because you trust all the networks involved, 
right?  It's a point-to-point link, right?).  Your network topology 
suddenly collapses and takes [fixing or unplugging]+2hrs to reconverge.
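
For a sense of how little protection that buys you once segments get 
bridged, here's a hedged scapy sketch (the prefix, mask and metric are 
made up) of the sort of unauthenticated RIPv2 response any host on the 
now-shared segment could originate:

    from scapy.all import IP, UDP
    from scapy.layers.rip import RIP, RIPEntry

    # Hypothetical, unauthenticated RIPv2 response advertising a bogus
    # route.  With no RIP authentication configured, nothing stops a
    # host that suddenly finds itself on the segment from sending
    # updates like this one.
    bogus_update = (IP(dst="224.0.0.9")            # RIPv2 multicast group
                    / UDP(sport=520, dport=520)    # RIP runs on UDP/520
                    / RIP(cmd=2, version=2)        # cmd=2 means "response"
                    / RIPEntry(addr="10.99.0.0",
                               mask="255.255.0.0",
                               metric=1))

    bogus_update.show()
    # send(bogus_update)  # scapy's send() would put it on the wire

(RIPv2 does support MD5 authentication; the point is that an 
unauthenticated, "trusted" link makes injection this trivial.)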

Or the situation I once found: two smallish WAN providers had both 
(stupidly) left STP turned on at the edge... when they were suddenly 
bridged together (by accident: I made a typo while setting up the 
VLANs) I managed to take down most of both providers' networks, and, as 
is typical of STP, both were down for <time to figure out what I did 
and fix it>+5 minutes.  Obviously I wasn't happy, and when we all 
figured out what had happened they weren't very happy with me, either.

As to security breaches: it's already extremely difficult for an 
attacker to a) know about the switch, b) target the switch, and c) 
actually compromise the switch, but it's *infinitely* harder to hack a 
piece of Cat5 cable than a switch!

Having said all that, many of the firewall modules/blades you can buy for 
chassis-based routers and switches (Cisco 3600 ISR, Catalyst 10000, 
Juniper [something], etc.) require you to configure their ports entirely 
using VLANs anyway.

So it's hardly a universal "must", certainly not in the technical sense 
- it's a very, very strong "should", one you should only disregard if 
a) you're overconfident of your own abilities, b) you have no truly 
private data, c) you don't care too much about pissing off your WAN 
providers (or you know they won't even notice!), and d) you don't have 
enough space to mount one or two more switches in the server closet.

Note also that you might be tempted to use 802.1q-over-802.3ad 
(VLAN-over-LAG), which does work... but which, generally speaking, 
turns off a lot of the hardware acceleration your NIC can do for you.  
Many NICs (certainly any half-decent one!) can still do IP offload with 
802.1q (VLAN tagging), but I haven't run into any that can still do IP 
offload with 802.3ad (link aggregation, aka "bonding" or 
"etherchannel").  Bundling links together (LAG) actually slowed my 
router down instead of speeding it up.
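
If you want to check where the offload went, here's a rough sketch for 
a FreeBSD-based box such as pfSense (the interface name "em0" is only 
an example); it parses ifconfig's capability flags, which is where 
things like VLAN_HWTAGGING and TSO4 show up:

    import re
    import subprocess

    # Example name only; substitute your physical or lagg interface.
    IFACE = "em0"

    # FreeBSD's ifconfig reports enabled capabilities as
    # options=...<FLAG,FLAG,...>
    out = subprocess.run(["ifconfig", IFACE],
                         capture_output=True, text=True).stdout
    m = re.search(r"options=[0-9a-fA-F]+<([^>]*)>", out)
    flags = set(m.group(1).split(",")) if m else set()

    # VLAN_HWTAGGING means 802.1q tags are handled in hardware.
    # Comparing the flags on the physical NIC against those on the lagg
    # interface shows which offloads survive the bundling.
    for flag in ("VLAN_HWTAGGING", "TXCSUM", "RXCSUM", "TSO4"):
        print(flag, "yes" if flag in flags else "no")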

Another aspect: if you're going to run your router in a blade chassis 
(virtualized or not), you really won't have much choice but to use 
VLANs for everything - most blade chassis don't give you dedicated 
physical Ethernet ports, certainly not more than two on any I've seen.  
Most of 'em have an embedded NIC (or two, or four...) that plugs 
straight into a backplane and is only exposed via a switch module.

(I am also noticing that pfSense 1.2.3 does not perform well (for me, 
at least) when forwarding traffic between "virtual switches" on a 
VMware ESXi 4 host connected to the physical switch through a 4x 
VLAN-over-LAG trunk.  I haven't had time to isolate the problem yet, 
although I observed slightly better performance when I let VMware 
handle the VLAN tagging instead of pfSense (i.e. created 4 untagged 
virtual e1000 NICs instead of 1 tagged vNIC).  Performance only seems 
to be affected if either the ingress or egress traffic is local to the 
ESXi host; I see more-or-less normal performance if both src and dst 
are off-host.)

-Adam Thompson
 athom...@athompso.net



