Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Nikola Kolev
Hello Nick,

On 18.05.2013, at 18:39, Nick Khamis sym...@gmail.com wrote:

 Hello Everyone,
 
 We are running:
 
 Gentoo Server on Dual Core Intel Xeon 3060, 2 Gb Ram
 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet
 Controller (rev 06)
 Ethernet controller: Intel Corporation 82573E Gigabit Ethernet
 Controller (rev 03)
 
 2 bgp links from different providers using quagga, iptables etc
 
 We are transmitting an average of 700Mbps with packet sizes upwards of
 900-1000 bytes when the traffic graph begins to flatten. We also start
 experiencing some crashes at that point, and not have been able to
 pinpoint that either.
 
 I was hoping to get some feedback on what else we can strip from the
 kernel. If you have a similar setup for a stable platform the .config
 would be great!
 
 Also, what are your thoughts on migrating to OpenBSD and bgpd, not
 sure if there would be a performance increase, but the security would
 be even more stronger?
 
 Kind Regards,
 
 Nick

You might be maxing out your server's PCI bus throughput, so it might be a 
better idea to get Ethernet NICs that sit in at least PCIe x8 slots.
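
A quick way to sanity-check that (the bus address below is just an example; 
substitute whatever lspci reports for your NICs):

  lspci | grep -i ethernet              # find the NICs and their bus addresses
  lspci -vv -s 01:00.0 | grep -i lnk    # LnkCap vs LnkSta shows negotiated PCIe speed/width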

Leaving that aside, I take it you've configured some sort of CPU/PCI affinity?

As for migrating to another OS, I find FreeBSD better in terms of network 
performance. The last time I checked, OpenBSD was either lacking multi-core 
support or was in the early stages of it.


Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread William Herrin
On Sat, May 18, 2013 at 11:39 AM, Nick Khamis sym...@gmail.com wrote:
 We are transmitting an average of 700Mbps with packet sizes upwards of
 900-1000 bytes when the traffic graph begins to flatten. We also start
 experiencing some crashes at that point, and not have been able to
 pinpoint that either.

Hi Nick,

You're done. You can buy more recent server hardware and get another
small bump. You may be able to tweak interrupt rates from the NICs as
well, trading latency for throughput. But basically you're done:
you've hit the upper bound of what slow-path (not hardware assisted)
networking can currently do.

Options:

1. Buy equipment with a hardware fast path, such as the higher end
Juniper and Cisco routers.

2. Split the load. Run multiple BGP routers and filter some portion of
the /8's on each of them. On your IGP, advertise /8's instead of a
default.
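
On the interrupt-rate tweak, a hedged example with ethtool (the exact meaning
of rx-usecs varies by driver, so check the e1000e documentation before relying
on specific values):

  ethtool -c eth0                 # show current coalescing settings
  ethtool -C eth0 rx-usecs 125    # raise interrupt moderation: more latency, fewer interrupts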

Regards,
Bill Herrin



-- 
William D. Herrin  her...@dirtside.com  b...@herrin.us
3005 Crane Dr. .. Web: http://bill.herrin.us/
Falls Church, VA 22042-3004



Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Jon Lewis

On Sun, 19 May 2013, William Herrin wrote:


On Sat, May 18, 2013 at 11:39 AM, Nick Khamis sym...@gmail.com wrote:

We are transmitting an average of 700Mbps with packet sizes upwards of
900-1000 bytes when the traffic graph begins to flatten. We also start
experiencing some crashes at that point, and not have been able to
pinpoint that either.


Hi Nick,

You're done. You can buy more recent server hardware and get another
small bump. You may be able to tweak interrupt rates from the NICs as
well, trading latency for throughput. But basically you're done:
you've hit the upper bound of what slow-path (not hardware assisted)
networking can currently do.

Options:

1. Buy equipment with a hardware fast path, such as the higher end
Juniper and Cisco routers.


I think you've misinterpreted his numbers.  He's using 1Gb Ethernet 
interfaces, so that's 700 Mbit/s.  He didn't mention whether he's done any IP 
stack tuning, or what sort of crashes he's having... but people have been 
doing higher bandwidth than this on Linux for years.
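
For what it's worth, the usual stack-tuning starting points look something 
like this (values are illustrative, not recommendations):

  sysctl -w net.core.netdev_max_backlog=4096   # queue of packets waiting for softirq processing
  sysctl -w net.core.rmem_max=16777216         # larger socket buffers mainly help the BGP sessions
  sysctl -w net.core.wmem_max=16777216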


--
 Jon Lewis, MCP :)   |  I route
 |  therefore you are
_ http://www.lewis.org/~jlewis/pgp for PGP public key_



Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Andre Tomt

On 18. mai 2013 17:39, Nick Khamis wrote:

Hello Everyone,

We are running:

Gentoo Server on Dual Core Intel Xeon 3060, 2 Gb Ram
Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet
Controller (rev 06)
Ethernet controller: Intel Corporation 82573E Gigabit Ethernet
Controller (rev 03)

2 bgp links from different providers using quagga, iptables etc

We are transmitting an average of 700Mbps with packet sizes upwards of
900-1000 bytes when the traffic graph begins to flatten. We also start
experiencing some crashes at that point, and not have been able to
pinpoint that either.

I was hoping to get some feedback on what else we can strip from the
kernel. If you have a similar setup for a stable platform the .config
would be great!

Also, what are your thoughts on migrating to OpenBSD and bgpd, not
sure if there would be a performance increase, but the security would
be even more stronger?


This is some fairly ancient hardware, so what you can get out of it will 
be limited. Gige should not be impossible, though.


The usual tricks are to make sure netfilter is not loaded, especially 
the conntrack/NAT-based parts, as those will inspect every flow for state 
information. Either make sure those parts are compiled out or that the 
modules/code never load.


If you have any iptables/netfilter rules, make sure they are 1) 
stateless and 2) properly organized (you can't just throw everything into 
FORWARD and expect it to be performant).
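
A minimal sketch of what "organized and stateless" can look like (chain name 
and port choices are just examples):

  iptables -N transit                              # per-direction chain keeps FORWARD itself tiny
  iptables -A FORWARD -i eth0 -o eth1 -j transit
  iptables -A FORWARD -i eth1 -o eth0 -j transit
  iptables -A transit -p tcp --dport 25 -j DROP    # stateless match: ports/flags only, no conntrack
  iptables -A transit -j ACCEPT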


You could try setting IRQ affinity so both ports run on the same core, 
however I'm not sure that will help much as it's still the same cache 
and distance to memory. On modern NICs you can do tricks like tying RX of 
port 1 to TX of port 2. Probably not on that generation, though.
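
Setting affinity by hand is just a bitmask write (the IRQ number below is 
hypothetical; check /proc/interrupts for the real ones, and stop irqbalance 
first if it's running):

  grep eth /proc/interrupts            # find the IRQ numbers for each port
  echo 1 > /proc/irq/45/smp_affinity   # pin (hypothetical) IRQ 45 to CPU0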


The 82571EB and 82573E are, while old, PCIe hardware, so there should not be 
any PCI bottlenecks, even with you having to bounce off that stone-age 
FSB that old CPU has. I'm not sure how well that generation of Intel NIC 
silicon does line rate, though.


But really you should get some newer-ish hardware with on-CPU PCIe and 
memory controllers (and preferably QPI). That architectural jump really 
upped the networking throughput of commodity hardware, probably by 
orders of magnitude (people were doing 40Gbps routing using standard 
Linux 5 years ago).


Curious about vmstat output during saturation, and kernel version too. 
IPv4 routing changed significantly recently and IPv6 routing performance 
also improved somewhat.





Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Nick Khamis
On 5/18/13, Michael McConnell mich...@winkstreaming.com wrote:
 Hello Nick,

 Your email is pretty generic, the likelihood of anyone being able to provide
 any actual help or advice is pretty low. I suggest you check out Vyatta.org,
 its an Open Source router solution that uses Quagga for its underlying BGP
 management, and if you desire you can purpose a support package a few grand
 a year.

 Cheers,
 Mike

 --

 Michael McConnell
 WINK Streaming;
 email: mich...@winkstreaming.com
 phone: +1 312 281-5433 x 7400
 cell: +506 8706-2389
 skype: wink-michael
 web: http://winkstreaming.com

 On May 18, 2013, at 9:39 AM, Nick Khamis sym...@gmail.com wrote:

 Hello Everyone,

 We are running:

 Gentoo Server on Dual Core Intel Xeon 3060, 2 Gb Ram
 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet
 Controller (rev 06)
 Ethernet controller: Intel Corporation 82573E Gigabit Ethernet
 Controller (rev 03)

 2 bgp links from different providers using quagga, iptables etc

 We are transmitting an average of 700Mbps with packet sizes upwards of
 900-1000 bytes when the traffic graph begins to flatten. We also start
 experiencing some crashes at that point, and not have been able to
 pinpoint that either.

 I was hoping to get some feedback on what else we can strip from the
 kernel. If you have a similar setup for a stable platform the .config
 would be great!

 Also, what are your thoughts on migrating to OpenBSD and bgpd, not
 sure if there would be a performance increase, but the security would
 be even more stronger?

 Kind Regards,

 Nick





Hello Michael,

I totally understand that my question is generic in nature. I will
definitely take a look at Vyatta and weigh the effort vs. benefit.
The purpose of my email is to see how people with similar
setups managed to get more out of their system using kernel tweaks or
further stripping of their OS. In our case, we are using Gentoo.

Nick.



Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Nick Khamis
On 5/19/13, Nikola Kolev ni...@mnet.bg wrote:
 You might be maxing out your server's PCI bus throughput, so it might be a
 better idea if you can get Ethernet NICs that are sitting at least on PCIe
 x8 slots.



Nikola, thank you so much for your response! It kind of looks that
way, and we do have another candidate machine that has a PCIe 3 x8.
First, I never liked riser cards, and the candidate IBM x3250 M4
does use them. Not sure how much of a hit I will take for that.
Second, are there any proven Intel 4-port PCIe 3 cards, preferably
Pro/1000?


 Leaving that aside, I take it you've configured some sort of CPU/PCI
 affinity?

For interrupts, we disabled CONFIG_HOTPLUG_CPU in the kernel and
assigned interrupts to the less-used core using APIC. I am not sure if
there is anything more we can do?

 As for migration to another OS, I find FreeBSD better as a matter of network
 performance. The last time I checked OpenBSD was either lacking or was in
 the early stages of multiple cores support.

I know I mentioned migration, but Gentoo has been really good to us,
and we've grown really fond of her :). I hope I can tune it further before
retiring it as our OS of choice.

Nick.



Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Zachary Giles
I had two Dell R3xx 1U servers with quad GigE cards in them and a few small
BGP connections for a few years. They were running CentOS 5 + Quagga with a
bunch of stuff turned off. Worked extremely well. We also had really small
traffic back then.

Server hardware has become amazingly fast under the covers these days. It
certainly still can't match an ASIC-based solution from Cisco etc., but
it should be able to push several GB of traffic.
In HPC storage applications, for example, we have multiple servers with
quad 40Gig and IB pushing ~40GB of traffic in fairly large blocks. It's not
networking, but it does demonstrate pushing data into daemon applications and
back down to the kernel at high rates.
Certainly a kernel routing table with no iptables and a small Quagga daemon
in the background can push something similar.

In other words, get new hardware and design it to flow.






On Sun, May 19, 2013 at 10:58 AM, Nick Khamis sym...@gmail.com wrote:

 On 5/18/13, Michael McConnell mich...@winkstreaming.com wrote:
  Hello Nick,
 
  Your email is pretty generic, the likelihood of anyone being able to
 provide
  any actual help or advice is pretty low. I suggest you check out
 Vyatta.org,
  its an Open Source router solution that uses Quagga for its underlying
 BGP
  management, and if you desire you can purpose a support package a few
 grand
  a year.
 
  Cheers,
  Mike
 
  --
 
  Michael McConnell
  WINK Streaming;
  email: mich...@winkstreaming.com
  phone: +1 312 281-5433 x 7400
  cell: +506 8706-2389
  skype: wink-michael
  web: http://winkstreaming.com
 
  On May 18, 2013, at 9:39 AM, Nick Khamis sym...@gmail.com wrote:
 
  Hello Everyone,
 
  We are running:
 
  Gentoo Server on Dual Core Intel Xeon 3060, 2 Gb Ram
  Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet
  Controller (rev 06)
  Ethernet controller: Intel Corporation 82573E Gigabit Ethernet
  Controller (rev 03)
 
  2 bgp links from different providers using quagga, iptables etc
 
  We are transmitting an average of 700Mbps with packet sizes upwards of
  900-1000 bytes when the traffic graph begins to flatten. We also start
  experiencing some crashes at that point, and not have been able to
  pinpoint that either.
 
  I was hoping to get some feedback on what else we can strip from the
  kernel. If you have a similar setup for a stable platform the .config
  would be great!
 
  Also, what are your thoughts on migrating to OpenBSD and bgpd, not
  sure if there would be a performance increase, but the security would
  be even more stronger?
 
  Kind Regards,
 
  Nick
 
 
 


 Hello Michael,

 I totally understand how my question is generic in nature. I will
 defiantly take a look at Vyatta, and weigh the effort vs. benefit
 topic. The purpose of my email is to see how people with similar
 setups managed to get more out of their system using kernel tweaks or
 further stripping on their OS. In our case, we are using Gentoo.

 Nick.




-- 
Zach Giles
zgi...@gmail.com


Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Nick Khamis
 Hi Nick,

 You're done. You can buy more recent server hardware and get another
 small bump. You may be able to tweak interrupt rates from the NICs as
 well, trading latency for throughput. But basically you're done:
 you've hit the upper bound of what slow-path (not hardware assisted)
 networking can currently do.

 Options:

 1. Buy equipment with a hardware fast path, such as the higher end
 Juniper and Cisco routers.

 2. Split the load. Run multiple BGP routers and filter some portion of
 the /8's on each of them. On your IGP, advertise /8's instead of a
 default.

 Regards,
 Bill Herrin


Hey Bill, thanks for your reply. Yeah, option 1... I think we
will do whatever it takes to avoid that route. I don't have a good
reason for it; it's just preference. Great manufacturers/products
etc..., we just like the flexibility we get with how things are set up
right now. Not to mention the extra rack space! Option 2 is exactly what
we are looking at. But before that, we are looking at upgrading to a
PCIe 3 x8 or x16 as mentioned earlier for that small bump. If we hit a
25% increase in throughput, that would keep the barracudas in
suits at bay. But for now, they are really breathing down my back...
:)


N.



Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Nick Khamis
 This is some fairly ancient hardware, so what you can get out if it will
 be limited. Though gige should not be impossible.


Agreed!!!

 The usual tricks are to make sure netfilter is not loaded, especially
 the conntrack/nat based parts as that will inspect every flow for state
 information. Either make sure those parts are compiled out or the
 modules/code never loads.

 If you have any iptables/netfilter rules, make sure they are 1)
 stateless 2) properly organized (cant just throw everything into FORWARD
 and expect it to be performant).


We do use stateful iptables on our router, some forward rules...
This is known to be one of our issues; not sure if having a separate
iptables box would be the best and only solution for this?


 You could try setting IRQ affinity so both ports run on the same core,
 however I'm not sure if that will help much as its still the same cache
 and distance to memory. On modern NICS you can do tricks like tie rx of
 port 1 with tx of port 2. Probably not on that generation though.

Those figures include IRQ affinity tweaks at the kernel and APIC level.


 The 82571EB and 82573E is, while old, PCIe hardware, there should not be
 any PCI bottlenecks, even with you having to bounce off that stone age
 FSB that old CPU has. Not sure well that generation intel NIC silicon
 does linerate easily though.

 But really you should get some newerish hardware with on-cpu PCIe and
 memory controllers (and preferably QPI). That architectural jump really
 upped the networking throughput of commodity hardware, probably by
 orders of magnitude (people were doing 40Gbps routing using standard
 Linux 5 years ago).

Any ideas on the setup??? Maybe as far as naming some chipset or interface?
And which xserver is the best candidate. Will google.. :)

 Curious about vmstat output during saturation, and kernel version too.
 IPv4 routing changed significantly recently and IPv6 routing performance
 also improved somewhat.



Will get that output during peak on Monday for you guys. Newest kernel,
3.6 or 3.7...


Thank you so much for your insight,

Nick.



Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Michael McConnell
Hello Nick,

Your email is pretty generic; the likelihood of anyone being able to provide 
any actual help or advice is pretty low. I suggest you check out Vyatta.org. 
It's an open source router solution that uses Quagga for its underlying BGP 
management, and if you desire, you can purchase a support package for a few 
grand a year.

Cheers,
Mike

--

Michael McConnell
WINK Streaming;
email: mich...@winkstreaming.com
phone: +1 312 281-5433 x 7400
cell: +506 8706-2389
skype: wink-michael
web: http://winkstreaming.com

On May 18, 2013, at 9:39 AM, Nick Khamis sym...@gmail.com wrote:

 Hello Everyone,
 
 We are running:
 
 Gentoo Server on Dual Core Intel Xeon 3060, 2 Gb Ram
 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet
 Controller (rev 06)
 Ethernet controller: Intel Corporation 82573E Gigabit Ethernet
 Controller (rev 03)
 
 2 bgp links from different providers using quagga, iptables etc
 
 We are transmitting an average of 700Mbps with packet sizes upwards of
 900-1000 bytes when the traffic graph begins to flatten. We also start
 experiencing some crashes at that point, and not have been able to
 pinpoint that either.
 
 I was hoping to get some feedback on what else we can strip from the
 kernel. If you have a similar setup for a stable platform the .config
 would be great!
 
 Also, what are your thoughts on migrating to OpenBSD and bgpd, not
 sure if there would be a performance increase, but the security would
 be even more stronger?
 
 Kind Regards,
 
 Nick
 



Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Nick Khamis
On 5/19/13, Zachary Giles zgi...@gmail.com wrote:
 I had two Dell R3xx 1U servers with Quad Gige Cards in them and a few small
 BGP connections for a few year. They were running CentOS 5 + Quagga with a
 bunch of stuff turned off. Worked extremely well. We also had really small
 traffic back then.

 Server hardware has become amazingly fast under-the-covers these days. It
 certainly still can't match an ASIC designed solution from Cisco etc, but
 it should be able to push several GB of traffic.
 In HPC storage applications, for example, we have multiple servers with
 Quad 40Gig and IB pushing ~40GB of traffic of fairly large blocks. It's not
 network, but it does demonstrate pushing data into daemon applications and
 back down to the kernel at high rates.
 Certainly a kernel routing table with no iptables and a small Quagga daemon
 in the background can push similar.

 In other words, get new hardware and design it flow.

What we are having a hard time with right now is finding that
perfect setup without going the whitebox route. For example, the
x3250 M4 has one PCIe gen 3 x8 full-length slot (great!) and one gen 2
x4 (not so good...). The ideal in our case would be a newish xserver
with two full-length gen 3 x8 or even x16 slots in a nice 1U form factor,
humming along and able to handle up to 64 GT/s of traffic,
firewall and NAT rules included.

Hope this is not considered noise on an old problem; any help
is greatly appreciated, and I will keep everyone posted on the final
numbers post-upgrade.

N.



Re: Inventory and workflow management systems

2013-05-19 Thread vijay gill
Resurrecting this thread. Anyone?
What software solution do people use for inventory management for things
like riser/conduit drawdown, fiber inventory, physical topology store,
CLR/DLR, x-connect, contracts, port inventory, etc.
Any experiences in integrating workflow into those packages for work
orders, modeling, drawdown levels, etc.



On Fri, Apr 4, 2008 at 2:16 PM, vijay gill vg...@vijaygill.com wrote:

 What software solution do people use for inventory management for things
 like riser/conduit drawdown, fiber inventory, physical topology store,
 CLR/DLR, x-connect, contracts, port inventory, etc.
 Any experiences in integrating workflow into those packages for work
 orders, modeling, drawdown levels, etc.

 /vijay






Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Phil Fagan
Not noise!
On May 19, 2013 10:20 AM, Nick Khamis sym...@gmail.com wrote:

 On 5/19/13, Zachary Giles zgi...@gmail.com wrote:
  I had two Dell R3xx 1U servers with Quad Gige Cards in them and a few
 small
  BGP connections for a few year. They were running CentOS 5 + Quagga with
 a
  bunch of stuff turned off. Worked extremely well. We also had really
 small
  traffic back then.
 
  Server hardware has become amazingly fast under-the-covers these days. It
  certainly still can't match an ASIC designed solution from Cisco etc, but
  it should be able to push several GB of traffic.
  In HPC storage applications, for example, we have multiple servers with
  Quad 40Gig and IB pushing ~40GB of traffic of fairly large blocks. It's
 not
  network, but it does demonstrate pushing data into daemon applications
 and
  back down to the kernel at high rates.
  Certainly a kernel routing table with no iptables and a small Quagga
 daemon
  in the background can push similar.
 
  In other words, get new hardware and design it flow.

 What we are having a hard time with right now is finding that
 perfect setup without going the whitebox route. For example the
 x3250 M4 has one pci-e gen 3 x8 full length (great!), and one gen 2
 x4 (Not so good...). The ideal in our case would be a newish xserver
 with two full length gen 3 x8 or even x16 in a nice 1u for factor
 humming along and being able to handle up to 64 GT/s of traffic,
 firewall and NAT rules included.

 Hope this is not considered noise to an old problem however, any help
 is greatly appreciated, and will keep everyone posted on the final
 numbers post upgrade.

 N.




Re: ISIS and OSPF together

2013-05-19 Thread vijay gill
Randy is correct. In most cases, the two protocols run concurrently
for a while so you can do your table validation and topology mapping, and
then you turn off OSPF. For vendors that aren't capable of supporting ISIS,
this is a feature and not a bug.



On Sun, May 12, 2013 at 1:57 AM, Randy Bush ra...@psg.com wrote:

  One scenario that i can think of when somebody might run the 2 protocols
  ISIS and OSPF together for a brief period is when the admin is migrating
  from one IGP to the other. This, i understand never happens in steady
  state. The only time this can happen is if an AS gets merged into another
  AS (due to mergers and acquisitions) and the two ASes happen to run ISIS
  and OSPF respectively. In such instances, there is a brief period when
 two
  protocols might run together before one gets turned off and there is only
  one left.

 no.  some ops come to see the light and move their network from ospf to
 is-is.  see vijay gill's nanog preso
 http://nanog.org/meetings/nanog29/presentations/aol-backbone.ram




Re: ISIS and OSPF together

2013-05-19 Thread Brandon Butterworth
 Randy is correct

But who'd follow his advice, he regularly encourages his competitors
to do stupid things.

brandon



Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Andre Tomt

(oops, I keep forgetting to send with my nanog identity..)

On 19. mai 2013 17:48, Nick Khamis wrote:

We do use a statefull iptables on our router, some forward rules...
This is known to be on of our issues, not sure if having a separate
iptables box would be the best and only solution for this?


Ah, statefulness/conntrack .. once you load it, you've kinda lost already.. 
Sorry. Any gains from other tunables will likely be dwarfed by the CPU 
cycles spent by the kernel to track all connections. The more diverse 
the traffic, the more it will hurt. Connection tracking is just 
inherently non-scalable (and fragile, by the way).


However, the cheapest and simplest fix is probably just to throw more modern 
hardware at it. A Xeon E3 (or two for redundancy ;)) is quite cheap..


The long-term, scalable solution is a deeper network like you hinted at, 
with statefulness - if really needed at all - pushed as close to your 
edge and as far away from your border as possible. But.. more boxes, 
more to manage, more power, more stuff that can fail, more redundancy 
needed.. it adds up.


Then again if you are close to gig actual traffic already, you might 
want to at least think about future scalability..


snip

Any ideas of the setup??? Maybe as far as naming some chipset, interface?
And xserver that is the best candidate. Will google.. :)


The big shift to integrated (and fast) I/O happened around 2008 IIRC; 
anything introduced after that is usually quite efficient at moving 
packets around, at least if Intel-based. Even desktop i3/i5/i7 platforms 
can do 10gig as long as you make sure you put the network chips/cards on 
the CPU's PCIe controller lanes. With anything new it's hard to go wrong.


xserver?? xserve? That is quite old..


Curious about vmstat output during saturation, and kernel version too.
IPv4 routing changed significantly recently and IPv6 routing performance
also improved somewhat.


Will get that output during peak on monday for you guys. Newest kernel
3.6 or 7...


Good. That is at least fairly recent and has most of the more modern 
networking stuff (and better defaults).





Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Matt Palmer
On Sun, May 19, 2013 at 11:48:17AM -0400, Nick Khamis wrote:
 We do use a statefull iptables on our router, some forward rules...
 This is known to be on of our issues, not sure if having a separate
 iptables box would be the best and only solution for this?

I don't know about only, but it'd have to come close to best.  iptables
(and stateful firewalling in general) is a pretty significant CPU and memory
sink.  Definitely get rid of any stateful rules, preferably *all* the rules,
and apply them at a separate location.  We've always had BGP routing
separated from firewalling, but we're currently migrating from
one-giant-core-firewall to lots-of-little-firewalls because our firewalls
are starting to cry a little.  Nice thing is that horizontally scaling
firewalls is easy -- just whack 'em on each subnet instead of running
everything together.  Core routing is a little harder to scale out
(although as has been described already, by no means impossible).  The
important thing is to remove *anything* from your core routing boxes that
doesn't *absolutely* have to be there -- and stateful firewall rules are
*extremely* high on that list.

- Matt

-- 
When the revolution comes, they won't be able to FIND the wall.
-- Brian Kantor, in the Monastery




Re: Looking for Netflow analysis package

2013-05-19 Thread Cameron Daniel

On 2013-05-17 8:11 pm, Tim Vollebregt wrote:

Is anyone using an open source solution to process netflow v9 captures?
I'm waiting for SiLK v3 for some time now, which is currently only
available for TLA's and Universities.

Currently looking into nfdump.


To drag this back on topic, yes, I'm currently using nfcapd/nfdump to 
capture and parse NetFlow v9. It's not as tidy as I'd like, but it does 
the job.


If you want something you can just point and shoot, nfsen ties those two 
tools together into one config file.
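
For anyone starting from scratch, the basic nfcapd/nfdump combination is 
roughly this (the port and paths are assumptions, adjust to your collector):

  nfcapd -D -w -p 9995 -l /var/cache/nfdump         # collect NetFlow v9 on UDP/9995 into rotated files
  nfdump -R /var/cache/nfdump -s srcip/bytes -n 10  # top 10 source IPs by bytes across the captures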



Tim





Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Ben
On Sat, May 18, 2013 at 11:39:55AM -0400, Nick Khamis wrote:
 Hello Everyone,
 
 We are running:
 
 Gentoo Server on Dual Core Intel Xeon 3060, 2 Gb Ram
 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet
 Controller (rev 06)
 Ethernet controller: Intel Corporation 82573E Gigabit Ethernet
 Controller (rev 03)
 
 2 bgp links from different providers using quagga, iptables etc
 
 We are transmitting an average of 700Mbps with packet sizes upwards of
 900-1000 bytes when the traffic graph begins to flatten. We also start
 experiencing some crashes at that point, and not have been able to
 pinpoint that either.
 
 I was hoping to get some feedback on what else we can strip from the
 kernel. If you have a similar setup for a stable platform the .config
 would be great!
 
 Also, what are your thoughts on migrating to OpenBSD and bgpd, not
 sure if there would be a performance increase, but the security would
 be even more stronger?

That hardware should be fine to do two gig ports upstream, with another
two to go to your network?

I'd check with vmstat 1 to see what your interrupt rate is like; if it's
above 40k/sec I'd check coalescing settings.
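
Something like this shows the rate (the "in" column of vmstat is total
interrupts/sec); the /proc/interrupts check is a rough per-NIC sketch:

  vmstat 1 5                   # watch the "in" column for interrupts per second
  grep eth /proc/interrupts    # per-IRQ counters; diff two samples a second apart for a rate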

I also prefer OpenBSD/OpenBGPD myself.  It's a simpler configuration, with fewer
things to fix.

With Linux you have to disable reverse path filtering and screw around with 
iptables to do a bypass on stateful filtering.  Then Quagga itself can be 
buggy. (My original reason for shifting away from Linux was that Quagga 
didn't fix enough of Zebra's bugs.. although that was many years ago, so 
things may have improved a little since then, but IME significantly buggy 
software tends to stay buggy even with fixing.)
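
For the reverse-path-filtering part, it's just a couple of sysctls (a minimal 
sketch; set per interface as needed):

  sysctl -w net.ipv4.conf.all.rp_filter=0       # disable strict reverse-path filtering globally
  sysctl -w net.ipv4.conf.default.rp_filter=0   # and for interfaces added later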

With regards to security of OpenBSD versus Linux, you shouldn't be exposing 
any services to the world with either.  And it's more stability/configuration 
that would push me to OpenBSD rather than performance.

And with regards to crashing, I'd try to figure out what is happening there 
quickly before making radical changes.  Is it running out of memory? Is 
Quagga dying?  Is there a default route that works when Quagga crashes?  One 
issue I had was Quagga crashing and leaving a whole lot of routes lingering 
in the table, and I had a script that would go through and purge them.
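
A rough sketch of that kind of cleanup, assuming your iproute2 rt_protos 
knows the "zebra" tag for routes zebra installed (verify with the show 
command before flushing anything):

  ip route show proto zebra    # list routes zebra left behind
  ip route flush proto zebra   # purge them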

I'm also a bit confused about your dual upstreams with two ethernet 
interfaces total: are they both sharing one pipe, or are there some Broadcom 
or such ethernet interfaces too?  I've found Broadcom chipsets can be a bit 
problematic, and the only stability issue I've ever had with OpenBSD is a 
Broadcom interface wedging for minutes under DDoS attack, which was a 
gigabit-ish speed DDoS with older hardware than yours.

Oh, to check coalescing settings under Linux use: ethtool -c eth0; ethtool -c eth1

Ben.



Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Ben
On Sun, May 19, 2013 at 11:48:17AM -0400, Nick Khamis wrote:
 We do use a statefull iptables on our router, some forward rules...
 This is known to be on of our issues, not sure if having a separate
 iptables box would be the best and only solution for this?
 
Do you actually need stateful filtering?  A lot of people seem to think
that it's important, when really they're accomplishing little with it;
you can block ports etc. without it.  And the idea of protecting hosts
from strange traffic is only really significant if the hosts have very
outdated TCP/IP stacks etc.  And it breaks things like having multiple
routers.

There's an obscure NOTRACK rule you can use to cut down the number of
state entries, or to remove state tracking for hosts that don't need it.

http://serverfault.com/questions/234560/how-to-turn-iptables-stateless

although googling for NOTRACK should find other things too.
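
A minimal raw-table sketch of that approach (interface names are assumptions;
adjust to your setup):

  iptables -t raw -A PREROUTING -i eth0 -j NOTRACK   # skip conntrack for packets arriving on eth0
  iptables -t raw -A PREROUTING -i eth1 -j NOTRACK   # and on eth1, so the state table stops growing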

Ben.



Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Ben
On Sun, May 19, 2013 at 11:48:17AM -0400, Nick Khamis wrote:
  But really you should get some newerish hardware with on-cpu PCIe and
  memory controllers (and preferably QPI). That architectural jump really
  upped the networking throughput of commodity hardware, probably by
  orders of magnitude (people were doing 40Gbps routing using standard
  Linux 5 years ago).
 Any ideas of the setup??? Maybe as far as naming some chipset, interface?
 And xserver that is the best candidate. Will google.. :)

A base-model E5 CPU is generally considered adequate, and has a direct link
between cache and PCIe, bypassing memory.

http://www.intel.com/content/www/us/en/io/data-direct-i-o-faq.html

The motherboard is likely to have an Intel i350 chipset for Ethernet.

http://www.intel.com/content/www/us/en/ethernet-controllers/ethernet-i350-server-adapter-brief.html

Ben.



Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Seth Mattinen
On 5/19/13 4:27 PM, Ben wrote:
 Do you actually need stateful filtering?  A lot of people seem to think
 that it's important, when really they're accomplishing little from it,
 you can block ports etc without it.


I believe PCI compliance requires it; other things like it probably do too.

~Seth



What hath god wrought?

2013-05-19 Thread Michael Painter

http://arstechnica.com/security/2013/05/ddos-for-hire-service-works-with-blessing-of-fbi-operator-says/



Re: What hath god wrought?

2013-05-19 Thread Joshua Goldbard
Like the comment below the article says, that line about turning off recursive 
DNS is pretty lame. Tantamount to saying "if you don't want me coming in your 
house, you shouldn't have used wooden doors, n00b!" It's still breaking and 
entering.

Call me crazy, but I tend to think every service has a backdoor these days. It's 
not surprising to see one for a DDoS service.

In other news, the sky is still blue.

Thanks for sharing the article though! Was a fun read.

Cheers,
Joshua

Sent from my iPhone

On May 19, 2013, at 4:59 PM, Michael Painter tvhaw...@shaka.com wrote:

 http://arstechnica.com/security/2013/05/ddos-for-hire-service-works-with-blessing-of-fbi-operator-says/
 



Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Valdis . Kletnieks
On Sun, 19 May 2013 16:42:23 -0700, Seth Mattinen said:
 On 5/19/13 4:27 PM, Ben wrote:
  Do you actually need stateful filtering?  A lot of people seem to think
  that it's important, when really they're accomplishing little from it,
  you can block ports etc without it.


 I believe PCI compliance requires it, other things like it probably do too.

It's the rare ISP whose border routers are in scope for PCI compliance.




Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread William Herrin
On Sun, May 19, 2013 at 11:34 AM, Nick Khamis sym...@gmail.com wrote:
 Hey Bill, thanks for your reply Yeah option 1.. I think we
 will do whatever it takes to avoid that route. I don't have a good
 reason for it, it's just preference. Option 2 is exactly what
 we are looking at.

Hi Nick,

You might get enough of a bump from something like an HP DL380p gen8
to saturate your gig-e. I wouldn't bank on stably going any higher
than that. And as someone else mentioned, definitely lose conntrack
and stateful firewalling. If you need 'em, move 'em to interior boxes
that aren't dealing with your main Internet pipe.

If you're up for a challenge there are specialty NIC cards like the
Endace DAG. They're usually used for packet capture but in principle
they have the right kind of hardware fast path (e.g. TCAMs) built in
to accomplish what you want to do.

Heck of a challenge though. I haven't heard of anybody putting
together a white-box fast path router before.

Regards,
Bill Herrin


-- 
William D. Herrin  her...@dirtside.com  b...@herrin.us
3005 Crane Dr. .. Web: http://bill.herrin.us/
Falls Church, VA 22042-3004



Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Andre Tomt

Minor nitpicking I know..

On 20. mai 2013 01:23, Ben wrote:

With Linux you have to disable reverse path filtering, screw around with 
iptables
to do bypass on stateful filtering.


You don't have to screw around with iptables. The kernel won't load the 
conntrack modules/code unless you actually try to load stateful 
rulesets*. I'd also argue that rp filtering being on by default is the 
better setting for the 99% of other use cases :-P


With Quagga I would tend to agree - but like you, I have not used it in ages, 
and things do change for the better over time -- occasionally.


* you CAN configure your kernel to always load it, but that is silly.





Re: High throughput bgp links using gentoo + stipped kernel

2013-05-19 Thread Andrew Jones

As for migration to another OS, I find FreeBSD better as a matter of
network performance. The last time I checked OpenBSD was either
lacking or was in the early stages of multiple cores support.


If you do decide to go the FreeBSD route (you can run OpenBGPD on 
FreeBSD if you like), check out the POLLING option on Ethernet NICs; it 
cuts down on the number of interrupts and can increase performance, 
particularly when dealing with smaller packets.
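
From memory (so treat this as a hedged sketch, not a recipe), polling needs a 
kernel built with the DEVICE_POLLING option and is then toggled per interface:

  # in the kernel config:
  #   options DEVICE_POLLING
  #   options HZ=1000
  ifconfig em0 polling      # enable; "ifconfig em0 -polling" turns it off again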





Re: Inventory and workflow management systems

2013-05-19 Thread Christopher Morrow
On Sun, May 19, 2013 at 1:21 PM, vijay gill vg...@vijaygill.com wrote:
 Resurrecting this thread. Anyone?
 What software solution do people use for inventory management for things
 like riser/conduit drawdown, fiber inventory, physical topology store,
 CLR/DLR, x-connect, contracts, port inventory, etc.
 Any experiences in integrating workflow into those packages for work
 orders, modeling, drawdown levels, etc.

isn't it odd/lame that in many cases the answer to this is 'build your own' ?