Date: Sun, 21 Dec 2008 12:58:42 +0900
From: Adrian Chadd adr...@creative.net.au
On Sat, Dec 20, 2008, Ingo Flaschberger wrote:
I'm not sure if this setup would ever be stable,
also with ucarp tweaks.
Hopefully FreeBSD will soon support more than one route.
FreeBSD, like all good open
* David Coulson:
Comparing Imagestream and Vyatta to Juniper is crazy. The first two
are software-based platforms (with perhaps some hardware off-load for
checksums and whatnot), whereas the Juniper pretty much just uses BSD
for control-plane features (BGP, for example, and controlling the
Once upon a time, David Coulson da...@davidcoulson.net said:
Comparing Imagestream and Vyatta to Juniper is crazy. The first two are
software-based platforms (with perhaps some hardware off-load for
checksums and whatnot), whereas the Juniper pretty much just uses BSD
for control-plane
Well, the J-series are fully software-based routers. Still, they have
their own routing daemons and such.
The difference is that Juniper, even on the J-series box, completely
separates the control plane and forwarding plane.
The forwarding plane on an M or T series is going to be a PPC-based
On Sat, Dec 20, 2008, Ingo Flaschberger wrote:
I'm not sure if this setup would ever be stable,
also with ucarp tweaks.
Hopefully FreeBSD will soon support more than one route.
FreeBSD, like all good open source projects, gets features supported when
people code them up.
So if you'd like to see
Dear Joe,
Several different traffic shaping strategies are available, and I think
all of them go far beyond simple.
ipfw add 100 pipe 1 all from 192.168.0.0/24 to any xmit vlan1
ipfw pipe 1 config bw 95Mbit/s queue 200Kbytes
that's simple.
Yes, but the point was that the feature was
Dear Joe,
Yes, but the point was that the feature was listed as simple traffic
shaping. You can do *complicated* traffic shaping too, which was the
reason I commented on that. Usually the ability to do complicated
traffic shaping means you can do simple traffic shaping too. ;-)
with linux?
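(For comparison, the simple ipfw pipe shown earlier maps onto Linux's tc. A minimal sketch, assuming the outbound interface is eth0 and the same 95 Mbit/s cap; the interface name and class numbers are only placeholders:
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 95mbit burst 200k
Traffic not classified elsewhere falls into class 1:10 and is capped at 95 Mbit/s.)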
Dear Joe,
Yes, but the point was that the feature was listed as simple traffic
shaping. You can do *complicated* traffic shaping too, which was the
reason I commented on that. Usually the ability to do complicated
traffic shaping means you can do simple traffic shaping too. ;-)
with
Dear Joe,
I did that experiment below. I didn't grab snapshots of the routing table
at the time, but I described the effect. Essentially, upon downing of the
interface, the local link via the vlan20 interface went away, and was
promptly replaced by the OSPF route (generally good/desirable).
Henry Yen wrote:
On Fri, Dec 19, 2008 at 18:32:40PM -0700, Michael Loftis wrote:
--On December 18, 2008 4:02:14 PM -0800 Bruce Robertson
br...@greatbasin.net wrote:
Imagestream does nice work as well.
I'll second the plug for imagestream as well.
Soucy, Ray wrote:
If all you're looking
So Juniper is BSD-based (if I recall correctly). The difference is the
selection of hardware and added routing hardware.
juniper uses a freebsd base. but the stack, routing protocols,
forwarding, drivers, ... are quite different.
randy
I wasn't aware of imagestream using any custom (asic) hardware, except
the T1/3 cards in the concentrator we bought from them (worked like a
champ, btw).
-brandon
On 12/19/08, Martin List-Petersen mar...@airwire.ie wrote:
Henry Yen wrote:
On Fri, Dec 19, 2008 at 18:32:40PM -0700, Michael
Brandon Galbraith wrote:
I wasn't aware of imagestream using any custom (asic) hardware, except
the T1/3 cards in the concentrator we bought from them (worked like a
champ, btw).
It doesn't have to be hardware. Even their custom-developed drivers and
software aren't available on anything but
It doesn't - It's just an x86 PC. I have Vyatta running inside VMware
ESX, not well, but it works ;-)
Comparing Imagestream and Vyatta to Juniper is crazy. The first two are
software-based platforms (with perhaps some hardware off-load for
checksums and whatnot), whereas the Juniper pretty
Thanks to the list again.
There's lots more options than I'd considered.
I think it's likely that I'll stick with what I know, which is Linux (not
FreeBSD) and Quagga. The lack of a need to learn new stuff is my main
motivation behind this because I'm unlikely to break things as frequently.
Chris wrote:
Now to look at very affordable layer 2, Gigabit 3com switches with good pps.
You should take a look at HP. They have very good gigabit switches and
also offer a lifetime guarantee on them.
HP actually has a CLI to configure the switch, not the crap 3Com has.
This might be of some use; it's a document written by one of the AMS-IX
engineers. It's a little aged (almost 2 years old), so there should be
some improvement in the numbers, but it might give you some insight into
the bottlenecks when pushing a Linux server to its max (10 Gigabit in
this case)
Dear Chris,
One final quick question on the NICs if I can. Following Mike's suggestion
about specific Intel chipsets (82575 or 82576) it looks like it's much
easier to source the chipsets mentioned by David (82571EB). If these NICs
are embedded on the motherboard is it going to be of
Ingo Flaschberger wrote:
OS:
Freebsd:
pros: very stable, quagga runs very well, fastforwarding support,
simple traffic shaping, interrupt-less polling supported (see the sketch below)
cons: only 1 route for each network, vrrp failover is not easy to
implement with quagga and ospf, no multipath routing
Linux:
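(For reference, the fastforwarding and polling knobs mentioned for FreeBSD above are normally turned on roughly like this; the interface name em0 is only an example, and polling also needs a kernel built with DEVICE_POLLING and a driver that supports it:
sysctl net.inet.ip.fastforwarding=1
ifconfig em0 polling
This is just a sketch of the usual settings, not a tuned configuration.)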
On Dec 18, 2008, at 4:13 AM, Jeroen Wunnink wrote:
This might be of some use; it's a document written by one of the AMS-
IX engineers. It's a little aged (almost 2 years old), so there
should be some improvement in the numbers, but it might give you
some insight into the bottlenecks when
Ingo Flaschberger wrote:
cons: only 1 route for each network, vrrp failover is not easy to
implement with quagga and ospf, no multipath routing
Anyone cares about VRRPD when you have Heartbeat?
Linux:
pros: more than 1 route for each network
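(For anyone weighing the VRRP/ucarp/Heartbeat options mentioned above, a typical ucarp invocation for a shared gateway address looks roughly like this; the addresses, vhid, password, and script paths are only placeholders:
ucarp -i eth0 -s 192.0.2.2 -v 1 -p secret -a 192.0.2.1 --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh
Heartbeat achieves the same failover by moving an IP resource between nodes rather than using VRRP-style advertisements.)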
* Alex Thurlow:
Depending on your WAN interface, there's actually a decent amount of
stuff out there. The cheaper alternative to me has actually always been
to get some old cisco hardware with the proper interfaces and use it for
media conversion. I have a 6500 with Sup1As in it. It can't
Eugeniu Patrascu wrote:
Chris wrote:
Now to look at very affordable layer 2, Gigabit 3com switches with
good pps.
You should take a look at HP. They have very good gigabit switches and
also offer a lifetime guarantee on them.
HP actually has a CLI to configure the switch, not the crap
Ingo Flaschberger wrote:
Multipath, yes, but flow-based, not per packet.
There is a patch for the 2.4 kernel, but not for 2.6.
Or tinker with iptables.
And last I checked, even with multiple 'nexthop' entries, it still
wasn't smart enough to drop a route if you lose an interface.
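(For the record, the multiple-nexthop form being discussed looks roughly like this with iproute2; the gateways and device names are only placeholders:
ip route add default scope global nexthop via 192.0.2.1 dev eth0 weight 1 nexthop via 192.0.2.129 dev eth1 weight 1
As noted above, selection is flow-based rather than per-packet on 2.6, and a next hop is not automatically withdrawn just because its interface loses link.)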
One final query for this thread if I may.
Our hardware provider has come back with this as an 'easy to source build'
in case we want two or three identical boxes:
Supermicro X7SBI-LN2 motherboard with
2 x Intel 82573V/L gigabit PCI-Express NICs
Does anyone have experience of these NICs before I
On Dec 18, 2008, at 4:00 AM, Eugeniu Patrascu wrote:
Chris wrote:
Now to look at very affordable layer 2, Gigabit 3com switches with
good pps.
You should take a look at HP. They have very good gigabit switches
and also offer a lifetime guarantee on them.
HP actually has a CLI to
Not to defend 3Com or anything, but all of their enterprise stuff (for quite
a few years now) has an extremely similar CLI to IOS. Came out very shortly
after they got involved with Huawei.
If you're already familiar with 3com enterprise gear, check out the 4200G
series for cheap L2
I have posted those off-list; for the list:
http://www.lannerinc.com/DM/FW-7550_DM.pdf
pros: cheap, cf-disk support, low power (~50W)
cf-disk support is pretty easy to add to lots of things. With the advent
of 4GB compact flash modules and CF-to-IDE adapters, it is not too hard
to avoid
Dear Joe,
Several different traffic shaping strategies are available, and I think
all of them go far beyond simple.
ipfw add 100 pipe 1 all from 192.168.0.0/24 to any xmit vlan1
ipfw pipe 1 config bw 95Mbit/s queue 200Kbytes
that's simple.
cons: only 1 route for each network, vrrp failover
We spent a good amount of time looking into deploying a home-grown
Linux-based CPE device over the summer.
Generally, Linux is not the issue with performance. You want to focus
on your hardware.
We've seen the best performance with Intel MT series PCI-X server NICs.
When we were testing the
Imagestream does nice work as well.
Soucy, Ray wrote:
If all you're looking for is basic routing though, it might be
worthwhile just getting a Vyatta appliance.
Chris ch...@ghostbusters.co.uk writes:
I'm hoping someone can offer some advice on suitable hardware and kernel
tweaks for using Linux as a router running bgpd via Quagga.
There was a talk, "Towards 10Gb/s open-source routing", at this year's
Linux-Kongress in Hamburg. Here are the slides:
Just as another source of info here, I'm running:
Dual-core Intel Xeon 3060 @ 2.4 GHz
2 GB RAM (it says Mem: 2059280k total, 1258500k used, 800780k
free, 278004k buffers right now)
2 of these on the motherboard: Ethernet controller: Intel Corporation
82571EB Gigabit Ethernet Controller
Florian Weimer wrote:
* Eugeniu Patrascu:
My concern with PC routing (in the WAN area) is a lack of WAN NICs
with properly maintained kernel drivers.
Depending on your WAN interface, there's actually a decent amount of
stuff out there. The cheaper alternative to me has actually always
Florian Weimer wrote:
* Eugeniu Patrascu:
You can also use a kernel with LC-Trie as route hashing algorithm to
improve FIB lookups.
Do you know if it's possible to switch off the route cache? Based on
my past experience, it was a major source of routing performance
dependency on traffic
the recent Facebook engineering post on scaling memcached to 200-300K
UDP requests/sec/node may be germane here (in particular, patches to
make irq handling more intelligent become very useful at the traffic
levels being discussed).
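(The irq handling tweaks referred to usually come down to pinning NIC interrupts to specific cores; a minimal sketch, where the interrupt number 45 is hypothetical and would be read from /proc/interrupts:
grep eth0 /proc/interrupts
echo 2 > /proc/irq/45/smp_affinity
The value written is a CPU bitmask, so 2 pins the interrupt to CPU1.)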
On Wed, Dec 17, 2008, Chris wrote:
All the responses have been really helpful. Thanks to everyone for being
friendly and for taking the time to answer in detail.
I've asked a hardware provider to quote for a couple of x86 boxes and I'll
look for suitable Intel NICs too.
Jim: We're a very
All the responses have been really helpful. Thanks to everyone for being
friendly and for taking the time to answer in detail.
I've asked a hardware provider to quote for a couple of x86 boxes and I'll
look for suitable Intel NICs too.
Jim: We're a very small ISP and have a full mix of packet
* Eugeniu Patrascu:
You can also use a kernel with LC-Trie as route hashing algorithm to
improve FIB lookups.
Do you know if it's possible to switch off the route cache? Based on
my past experience, it was a major source of routing performance
dependency on traffic patterns (it's basically
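(I don't believe the route cache can be switched off outright on a stock 2.6 kernel, but it can be flushed and its sizing tuned; a sketch, with the figures purely illustrative:
ip route flush cache
sysctl -w net.ipv4.route.max_size=1048576
sysctl -w net.ipv4.route.gc_thresh=65536
The LC-Trie FIB mentioned above is a kernel build option, CONFIG_IP_FIB_TRIE as opposed to the older CONFIG_IP_FIB_HASH.)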
You've given me lots to think about ! Thanks for all the input so far.
A few queries for the replies if I may. My brain is whirring.
Chris: You're right and I'm tempted. I've almost had my arm twisted to go
down the proprietary route as I have some Cisco experience but have become
pretty
The boxes (3650s) came with Broadcom BCM5708 on-board, but I push most
of my traffic over these:
1c:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet
Controller (rev 06)
Subsystem: Intel Corporation PRO/1000 PT Dual Port Server Adapter
Flags: bus master, fast
Chris wrote:
Eugeniu: That's very useful. The Intel dual port NICs mentioned aren't any
good then I presume (please see my comment to David).
Actually it depends on the motherboard chipset. Some chipsets allocate
an interrupt per slot, and when you have lots of traffic between two
ports on
I don't think you will have any troubles with industry standard hardware for
the rates you are quoting. When you get in excess of 300Mbps you have to
start worrying about PPS. When you are looking at 600Mbps then you
should pick out your system more carefully (TOE NICs, PCIe(x), CPU
at over
I've been pretty happy running IBM x-series hardware using RHEL4.
Usually it's PPS rather than throughput that will kill it, so if you're
doing 250Mbit of DNS/I-mix/HTTP, you'll probably have very different
results. There are some rx-ring tweaks for the NICs that are needed, but
on the most
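(The rx-ring tweak alluded to is usually done with ethtool; a sketch, with the interface name and ring size purely illustrative:
ethtool -g eth0
ethtool -G eth0 rx 4096
Check the maximum reported by -g first; many e1000-class NICs top out at 4096 descriptors.)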
chris == chris ch...@ghostbusters.co.uk writes:
chris All the responses have been really helpful. Thanks to everyone
chris for being friendly and for taking the time to answer in detail.
chris I've asked a hardware provider to quote for a couple of x86
chris boxes and I'll look for suitable
Ah, NO! Stay away from Click. It is NOT stable. Unless you want to hold
your network together with paperclips and rubber bands, stay away.
We use Linux software routing extensively where I work. We use Quagga
primarily. I tried XORP, and it was very interesting, but not
particularly ready for