Re: [c-nsp] 7600 Owners, failure stats wanted

2012-01-23 Thread Pete Templin

On 1/21/12 8:28 AM, James Bensley wrote:


Even if you've never had a failure I'd still like to know; that's just as
important.


I should also mention that at my previous job, we had an event one 
fine December afternoon.  Three 6509s all fried simultaneously: 3x 
chassis, 6x sup, 4x linecards.  DC power, and the boys were working on 
DC power when it happened.  Interestingly, each 6509 had a 7507 
underneath it, fed by the same fuse panel as the respective 6509; the 
7507 emerged unscathed.


To this day, the facilities crew won't accept that it was a power event, 
even when I point out the phantom -28V that appears on the distribution 
panel when the inbound and outbound circuit breakers are all off.  :)


pt


___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] ASR as BGP Route Server

2012-01-23 Thread Michael Lambert
Hi All,

I was wondering if anyone could comment on experiences (production or 
otherwise) with BGP route server functionality on the ASR1000 series/IOS-XE.  
Can you offer any comparisons (stability, configuration, table sizes, etc) 
between it and the open-source implementations (quagga, BIRD, OpenBGPD)?

Thanks,

Michael




Re: [c-nsp] in praise of the cat6500 Re: Flow tools

2012-01-23 Thread Jeff Bacon
 Date: Fri, 20 Jan 2012 20:00:56 +
 From: Alessandra Forti alessandra.fo...@cern.ch
 
 Hi,
 
 I got some money to upgrade my network infrastructure from 1Gbps to 10Gbps.
 
 At the moment I have a cat6509E with a Sup720. This system has been
 working fine for 6 years. The upgrade will have to last a similar number
 of years and our main requirement is throughput with minimal routing if
 we are going to double the link to the outside world. My initial
 combination to support 16 racks at 10Gbps was to simply buy
 4x6716-10T-3C blades and keep the Sup720. I then got enough money to
 upgrade the Sup720 to a Sup2T (with 6816-10T-2T blades). I was wondering
 if this is really necessary, or if the Sup720 will last that long, i.e.
 another 6 years. I'm not an expert and would appreciate your comments
 before I go down this route, because the alternative is to replace the
 6509 altogether (most likely with a Force10 Z9000).
 

OK, so I'm a little late here, and this isn't normally what I
get into, but...

What strikes me here is "throughput with minimal routing." What
is the 6500 actually _doing_? Is it doing primarily layer-2 with
some VLAN SVIs and light layer-3 with a routing protocol?

If that's the case, well, you could easily use a bunch of 6704s as
one poster suggested to get a bunch of cheap line-rate ports, or 
use a 6716 and oversubscribe... 

But if it were me? I'd toss the 6500 entirely and get an Arista 7050.
If you need more ports, then use a 40G aggregator and fan-out on
7050s. Or insert some other vendor that's doing 10G on commodity
silicon here. 

(I'd also suggest ditching the idea of 10G-T and just going twinax.
You can reach 7 meters or even more, and it's more reliable and draws
far less power than 10G-T, not to mention better error rates. The
cables will cost you a bit more, but overall it's worth it. Of course
it depends on what you're using for TOR.)


The Cat6k got its start as an L2 device. It was that until some
bright boy decided to gut a 7200 NPE and glue it into the supervisor
and create the MSFC. But we've come a long way since then. The
Cat6500 at this point functionally resembles a very high throughput
mid-range-capability switching router that happens to also be able
to play a dumb L2/L3 switch when necessary. 

As an L2 or basic L3 switch, it's out-matched and out-classed. (IMO,
so is the entire cat4k line these days, except in certain situations.)
As a 10G L3 switch, it's massively out-classed. If that's all you want,
buy something else. Your per-port cost for 10G is so high on a cat6500
that it's just ridiculous. 

The cat6k has graduated to being a high-touch device. MPLS, AToM,
QoS, Netflow (yes RD, it's got flaws, but it still HAS Netflow),
complex configurations - got it. No, it's not an ASR9k. But it
can do a fair bit of what an ASR9k can do for way less and with
latency/mpps rates that an ASR9k could only dream of. It's in the
middle somewhere. 

If you're not going to use any of those features, there's plenty
of better cheaper alternatives than a cat6500. And it doesn't
sound like the OP intends to. 

Myself - yes, I have a mesh of 6500s, which, for any site with
any density of hosts, immediately drops down into an Arista L3
dist/fan-out layer, because 10G ports are far cheaper on an Arista.
The 6500s own the 10G MAN links and split the traffic out into
its various layers; really it breaks down to a single
vs720/X6708 or sup2t/6908, with all the traffic living on the plane
of the 6x08 and maybe a 67xx-SFP feeding in some 1G traffic.

Just another $0.02 in the pot.

-bacon









Re: [c-nsp] ASR as BGP Route Server

2012-01-23 Thread Piotr Wojciechowski
On 1/23/12 18:45 , Michael Lambert wrote:
 Hi All,
 
 I was wondering if anyone could comment on experiences (production or 
 otherwise) with BGP route server functionality on the ASR1000 series/IOS-XE.  
 Can you offer any comparisons (stability, configuration, table sizes, etc) 
 between it and the open-source implementations (quagga, BIRD, OpenBGPD)?
 

Hi Michael,

The ASR1000 is a good choice when you plan to deploy an RR in your
network. High performance is one of the key factors when you're looking
for the right platform. It's also scalable and provides flexible CoPP
mechanisms, plus hardware and software redundancy. Not to mention a good
price-to-performance ratio, especially if you are planning to deploy the
ASR1001, whose performance can be upgraded by license as your network
grows. Table size depends on how much DRAM you put in it; check the
datasheets for details on how many prefixes you can store.

All the other solutions are just software running on Linux or Unix
systems, so they depend on the hardware platform's performance and
stability. Not really a good option, IMO.
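For reference, a route-server client peering on IOS/IOS-XE is enabled
with the route-server-client neighbor option; a minimal, untested
sketch with made-up addresses and AS numbers:

router bgp 64500
 ! the route server reflects client routes without inserting
 ! its own AS into the AS_PATH
 neighbor 192.0.2.10 remote-as 64496
 address-family ipv4
  neighbor 192.0.2.10 activate
  neighbor 192.0.2.10 route-server-client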

Regards,

-- 
Piotr Wojciechowski  (CCIE #25543)  | The trouble with being a god is
http://ccieplayground.wordpress.com | that you've got no one to pray to
JID: pe...@jabber.org   |   -- (Terry Pratchett, Small Gods)




Re: [c-nsp] Syslog Patterns

2012-01-23 Thread Livio Zanol Puppim
My script... (sorry for the Portuguese)

You need to execute the command "file prompt quiet" in configure
terminal mode before running the script.

It sends the running configuration to a server (can be TFTP, FTP, SCP,
etc.) every time a user enters configure terminal mode and exits. It
works with IOS, NX-OS and IOS XR.

The file name format is:
hostname_yyyy-mm-dd-hh-mm-ss_user.log
example: RT066_2012-01-23-10-13-13_tinka.log

Of course it needs some tuning (nice configuration, priority, etc.).
It doesn't check whether something has actually changed, because every
OS version has different commands (thank you, business units).

Enabling the script (I copied it to the router using Dynamips to test):

event manager directory user policy nvram:/
event manager policy script.tcl type user

#THE SCRIPT!
::cisco::eem::event_register_syslog pattern ".*CONFIG_I.*"
namespace import ::cisco::eem::*
namespace import ::cisco::lib::*
set servidor "192.168.1.1"

# Get the device hostname
set nome_ativo [info hostname]

# Get the time of the event
set data [clock format [clock seconds] -format "%Y-%m-%d-%H-%M-%S"]

# Get the line that generated the event (the log line produced when
# someone exits 'configure terminal')
array set arr_einfo [event_reqinfo]
set config_changes $arr_einfo(msg)

# Regexp that extracts the user who changed the configuration
set result [regexp {^.*by\s(.*)\s.*} $config_changes tudo user]
if {$result == 0} {
    set result [regexp {^.*by\s(.*)} $config_changes tudo user]
}

# Put the device hostname at the start of the file name
set nome_arquivo $nome_ativo

# Append date and user to the file name
append nome_arquivo "_" $data "_" $user ".log"

# Start the process of saving the file to the desired location
if {[catch {cli_open} result]} {
    error $result $errorInfo
} else {
    array set cli1 $result
}
if {[catch {cli_exec $cli1(fd) "enable"} result]} {
    error $result $errorInfo
}
#=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
# Change the line below if you change the transfer method
#
set comando "copy running-config tftp://$servidor/$nome_arquivo"
#puts $comando
#
# Change the line above if you change the transfer method
#=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
if {[catch {cli_exec $cli1(fd) $comando} result]} {
    error $result $errorInfo
}
# Close the open cli before exit.
#if {[catch {cli_close $cli1(fd) $cli1(tty_id)} result]} {
#    error $result $errorInfo
#}
puts "Running config saved to server $servidor"



2012/1/18 Peter Rathlev pe...@rathlev.dk

 On Wed, 2012-01-18 at 17:10 +0100, Martin Komoň wrote:
  There is a feature configured with "parser config cache interface"
  that caches interface configuration. On a Cat6k5 w/ Sup720 it reduces
  the time to generate the running config from ~7 to ~1 sec (YMMV).
 
  Beware of the bug CSCtd93384!

 Nice, that does help. :-) Just tested on one Sup720 and "sh run"
 went from ~20 seconds to ~10 seconds. Too bad about CSCtd93384, but at
 least it's supposed to be fixed in SXI4.

 --
 Peter






-- 
[]'s

Lívio Zanol Puppim

Re: [c-nsp] Central services VPNs

2012-01-23 Thread Livio Zanol Puppim
Can't this be done using routing policies?

Just a guess...

2011/12/18 MKS rekordmeis...@gmail.com

 So I have an MPLS VPN question for the masterminds on this list ;)

 I have two central-services VRFs, A and B, and I need route leaking
 (same import/export) between them to optimize traffic flow. The reason
 I need two VRFs is that I have to specify a different default gateway
 for each VRF.
 But the problem is that this setup eats up TCAM space in the 6500s we
 use, and doesn't scale when adding a third or fourth VRF, once the
 VRFs contain 10k routes.

 Can this be done in a scalable way (TCAM-wise) while still being able
 to optimize traffic flow and support different default GWs?

 Regards
 MKS
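For what it's worth, the textbook way to leak routes between
central-services VRFs is shared route-target import/export plus a
per-VRF default route; a minimal sketch with made-up RT values and
gateway addresses (it doesn't, by itself, solve the TCAM duplication
described above):

ip vrf SERVICES-A
 rd 64500:1
 route-target export 64500:1
 route-target import 64500:1
 ! import B's routes into A
 route-target import 64500:2
ip vrf SERVICES-B
 rd 64500:2
 route-target export 64500:2
 route-target import 64500:2
 ! import A's routes into B
 route-target import 64500:1
! distinct default gateway per VRF
ip route vrf SERVICES-A 0.0.0.0 0.0.0.0 192.0.2.1
ip route vrf SERVICES-B 0.0.0.0 0.0.0.0 198.51.100.1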




-- 
[]'s

Lívio Zanol Puppim


[c-nsp] cat4500 Interface utilization by 6500 NAM module

2012-01-23 Thread Muhammad Asif Rao
Hi folks,

This is regarding the 6500 NAM module. I have cat4500 switches and need
to monitor interface bandwidth utilization via a NAM installed in a 6500
chassis. How can I monitor traffic utilization on the NAM the way we do
in PRTG/MRTG/Cacti?



Regards,