The 4500s with Sup6 + 46xx modules and higher suck less, going to
a 24Gb-per-slot backplane (a 48-port gig card against 24Gb is where
the 2:1 oversubscription comes from), and I think with Sup8 it
finally goes full line rate. If you keep the 8 ports on an ASIC
boundary to yourself, you'll be OK.
Most of the 4k sups are oversubscribed to some degree, though.
RE: bridging, that's just sort of how hypervisors work - each host
runs a local bridge so it can control the MAC domain locally and
forward out the uplink. Include dot1q trunking and it starts making
more sense. Add in things like OpenStack, VMware + NSX, or anything
that does layer 2 virtualization over VXLAN, and the bridge is the
piece everything hangs off.
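For reference, a VLAN-aware bridge on a Proxmox node usually looks
something like this in /etc/network/interfaces - a minimal sketch,
assuming a single uplink named eno1 (names and addresses are
placeholders):

    auto eno1
    iface eno1 inet manual

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1        # physical uplink carrying the trunk
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes    # guests can tag their own VLANs
        bridge-vids 2-4094       # dot1q VLANs trunked through to guests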
Another thing: depending on your NIC, using a bond may kill things
like hardware offload functionality - ran into that before too. At
1GbE it's probably a wash, as it's hard to tax a CPU at that rate,
but do that at 10GbE without some strategic IRQ pinning and you'll
be hurting.
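A quick way to check both - a sketch, assuming the uplink is eno1;
the IRQ number below is made up, read the real one from
/proc/interrupts:

    # compare offloads on the physical NIC vs. the bond on top of it
    ethtool -k eno1  | grep -E 'segmentation|checksum|scatter'
    ethtool -k bond0 | grep -E 'segmentation|checksum|scatter'

    # find the NIC's queue IRQs, then pin one to CPU 2 (hex mask 0x4)
    grep eno1 /proc/interrupts
    echo 4 > /proc/irq/123/smp_affinity   # 123 is a placeholder IRQ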
One thing I tell clients - if you need more than a gig, get a 10GbE
interface. That comes with its own challenges too; see if you can
get it to use all of that 10GbE... The issue you face with using an
802.3ad bond is the flow-hashing. You're using a layer 2 policy, so
every frame between the same two MACs hashes onto the same slave -
no single pair of hosts ever sees more than one link's worth of
bandwidth.
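Switching the hash to layer3+4 at least spreads individual TCP/UDP
flows across the slaves (each flow is still capped at 1Gb). A
minimal sketch for /etc/network/interfaces, assuming slaves
eno1/eno2 and switch ports already set up as an LACP port-channel:

    auto bond0
    iface bond0 inet static
        address 192.0.2.20/24
        bond-slaves eno1 eno2
        bond-mode 802.3ad               # LACP - switch side must match
        bond-miimon 100                 # link monitor interval, ms
        bond-xmit-hash-policy layer3+4  # hash on IP+port, not just MAC

Per the kernel bonding docs, layer3+4 isn't strictly 802.3ad
conformant for fragmented traffic, but it's the usual choice for
storage networks.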
I am working on a ProxmoxVE cluster I have set up. I need a bit
better network performance, as I am also running Ceph for the
storage layer. The network configuration I have is the following.
It seems to be working - the nodes I have configured appear to be
running with better throughput.