Re: [c-nsp] MTU issue on a GRE tunnel
-----Original Message----- From: cisco-nsp-boun...@puck.nether.net On Behalf Of ML

> The two tunnel endpoints are ME3400s.

I believe GRE tunnels are not supported in hardware on the ME3400, so the packets hit the CPU, which is not very fast on that platform. I know 100% for a fact this is the case on 3550s, and AFAIK it applies to all of the lower-end fixed-configuration switches.

Ryan Werber
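For reference, here is a minimal GRE tunnel sketch with the MTU/MSS values commonly used over a 1500-byte path, plus one way to confirm the traffic is being handled by the CPU. The addresses and interface names are hypothetical, and whether ip tcp adjust-mss is available depends on the platform and feature set:

  ! GRE adds 24 bytes of overhead, hence ip mtu 1476 over a 1500-byte path
  interface Tunnel0
   ip address 10.255.0.1 255.255.255.252
   ip mtu 1476
   ! clamp TCP MSS: 1476 minus 40 bytes of IP+TCP header
   ip tcp adjust-mss 1436
   tunnel source Loopback0
   tunnel destination 192.0.2.2

  ! Process-switched GRE shows up as high CPU in the top processes
  show processes cpu sorted | exclude 0.00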
Re: [c-nsp] Cisco GSR Chokes on BGP
-----Original Message----- From: cisco-nsp-boun...@puck.nether.net On Behalf Of Michael K. Smith

> Hello Dominic:
>
> It looks like you only have 256 MB of packet RAM. You should probably
> upgrade to 512 MB. I run multiple full peers on 3-port GE cards with no
> trouble, but they are all set to 512/512.

I can confirm that 256/256 works fine for us with 3 full BGP peers plus public peering. The only differences between our setups are that we are running a GRP-B and IOS gsr-k4p-mz.120-32.SY10.bin - which is the final release for the GRP (time to upgrade!).

FRU: Linecard/Module: 3GE-GBIC-SC=
Processor Memory: MEM-GRP/LC-256=
Packet Memory: MEM-LC1-PKT-256=

#execute-on slot 2 show proc mem | i Free
========= Line Card (Slot 2) =========
Processor Pool Total: 189351488  Used: 119945424  Free: 69406064

One other thing I would look at is whether you are running ANY sort of IPv6 on that card. Engine 2 cards puke on any sort of IPv6 throughput - I got that from this mailing list a few months ago.

Hope this helps.

Ryan Werber
Epik Networks
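If it helps, these are standard IOS commands for double-checking what is installed and free on the RP and on a given line card (the slot number is just an example):

  ! Installed DRAM on the route processor
  show version | include bytes of memory
  ! Processor pool totals and free memory on the RP
  show memory summary | include Processor
  ! Same check on a line card
  execute-on slot 2 show processes memory | include Free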
Re: [c-nsp] RFC-1483 on Cisco 12000
-----Original Message----- From: cisco-nsp-boun...@puck.nether.net On Behalf Of Forrest W. Christian

> I have a new-to-me Cisco 12008 which I am working on swapping in as a
> replacement for a 7206VXR which was moving way too much traffic. GRP,
> Engine 1 Giga-E, Engine 0 quad ATM OC3.

Please be aware that the ATM card is no longer supported and will not work in more recent IOS. It will also disable dCEF with a full BGP table, which disables the card.

> Things have gone really well, and I'm quite happy so far... but I ended up
> with one surprise. I didn't realize the 12008 won't bridge an ATM PVC to a
> VLAN (or any Ethernet, for that matter)... Basically we have a few things
> for which we extend an 802.1Q VLAN across a point-to-point ATM circuit to a
> far end. I have a couple of ideas on how to get around it, but would really
> prefer that the 12008 do the work. It doesn't look like an option for me -
> but I figured I'd ask if anyone knows of something I missed. The other ends
> of the ATM circuit are various old and new Ciscos, some with MPLS and some
> not, although it doesn't look like the 12008 will do EoMPLS either with the
> engines I have. I'll probably look at some other options using a tunnel or
> similar.

With what you have, (I believe) you are out of luck. Local switching for 'any-to-any' is only supported on the ISE engines (3+), all of which are still very expensive.

Option #1 (which I know works): enable MPLS end to end and acquire a 3GE-GBIC-SC, the 3-port card; these can be found for under $1000 on eBay. The reason is that the 1GE-GBIC is an Engine 1 card, which cannot do edge MPLS functionality, while the 3GE-GBIC is an Engine 2 card, which can. Then you do mpls l2transport and an xconnect (a rough sketch follows after this message); documentation on setting this up is easily found on Cisco's site.

Option #2: to do anything with L2TP, I believe you need a tunneling card, which requires an Engine 2 (backbone) card. A POS OC-48 is most likely going to be the cheapest, at around $500 on eBay. I have not done this myself, nor do I recommend it; it wastes a slot.

Option #3: do L2TP to the recently retired 7200 and do the encapsulation over IP. This, I believe, is your only $0-cost option. I would check whether all of your devices will do L2TP.

Evidence of MPLS edge support:

#show diags 2 | i 3GE
FRU: Linecard/Module: 3GE-GBIC-SC=

#show mpls l2transport hw-capability interface gi2/1
<snip>
Transport type Eth VLAN
 Core functionality:
  MPLS label disposition supported
  Distributed processing supported
  Control word processing supported
  Sequence number processing not supported
 Edge functionality:
  MPLS label imposition supported
  Distributed processing supported
  Control word processing supported
  Sequence number processing not supported
</snip>

#show diags 0 | i GE
FRU: Linecard/Module: GE-GBIC-SC-B=

#show mpls l2transport hw-capability interface gi0/0
<snip>
Transport type Eth VLAN
 Core functionality:
  MPLS label disposition supported
  Distributed processing supported
  Control word processing supported
  Sequence number processing not supported
 Edge functionality:
  Not supported
</snip>

Hope this helps,

Ryan Werber
Sr. Network Engineer
Epik Networks
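For what it's worth, here is a rough sketch of the sort of config Option #1 implies, in the VLAN-mode EoMPLS style. This is a generic example rather than anything taken from either box: the interface numbers, VLAN, VC ID and loopback addresses are made up, an IGP and LDP between loopbacks are assumed to already be in place, and older 12.0S releases used 'mpls l2transport route' on the subinterface instead of 'xconnect':

  ! PE1 (12008 / 3GE-GBIC-SC side)
  mpls label protocol ldp
  !
  interface GigabitEthernet2/1.100
   encapsulation dot1Q 100
   xconnect 10.0.0.2 100 encapsulation mpls
  !
  ! PE2 (far-end router) mirrors it, pointing back at PE1's loopback
  interface GigabitEthernet0/1.100
   encapsulation dot1Q 100
   xconnect 10.0.0.1 100 encapsulation mpls
  !
  ! Verify the pseudowire comes up
  show mpls l2transport vc 100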
Re: [c-nsp] 12k Full BGP Feed Memory Requirements
-----Original Message----- From: Antonio Soares [mailto:amsoa...@netcabo.pt]
Sent: Friday, June 05, 2009 4:14 AM

> Wow, this is unbelievable! Can you show us your "show proc mem | inc BGP"?
> Do you really have two full BGP feeds (about 284k prefixes each)?

#show proc memory | i BGP
 169   0  2895956668  1123582500  310165452      0      0  BGP Router
 172   0     3975400  1008225208       6840  53464      0  BGP I/O
 173   0        4188        1220      14028      0      0  BGP Scanner

The first one is Cogent (174), the second one is Tiscali (3257). There are 4 iBGP route-servers as well. We have ~10 full transit feeds throughout our ASN, as well as a ton of peering. The only things changed below are IP addresses, to protect the innocent. We currently have ~130 MB free on the GRP-B. We also have 1 directly connected eBGP IPv6 peer and 5 throughout our ASN.

38.103.xx.xx  4    174  3895305    60405  2215518900  5w6d      283503
77.67.xx.xx   4   3257  5813157   139266  2215518900  6w6d      282571
PEER-RS-1     4  21513  2472535  3813308  2215518900  15:25:46  100863
RS-1          4  21513  4092583  3613405  2215518900  6w6d      265775
RS-2          4  21513  3244549  3613398  2215518900  6w6d      267897
RS-3          4  21513  5660680  3711962  2215518900  1w1d      284664

show ip cef summary
IP Distributed CEF with switching (Table Version 8565971), flags=0x0
 288375 routes, 0 reresolve, 0 unresolved (0 old, 0 new), peak 18273
 8561775 instant recursive resolutions, 0 used background process
 12 load sharing elements, 12 references
 1389 in-place/0 aborted modifications
 57883336 bytes allocated to the FIB table data structures
 universal per-destination load sharing algorithm, id 6CE54348
 2(0) CEF resets
 Resolution Timer: Exponential (currently 1s, peak 4s)
 Tree summary:
  8-8-8-8 stride pattern
  short mask protection disabled
  288375 leaves, 14605 nodes using 23265244 bytes
 Transient memory used: 149355436, max: 149395476
 Table epoch: 0 (288375 entries at this epoch)
Adjacency Table has 41 adjacencies
 34 IPv4 adjacencies
 7 IPv6 adjacencies
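As a rough back-of-envelope check on the figures above (my own arithmetic, not from the original post): BGP Router is holding about 310 MB for roughly 288k prefixes learned over the sessions shown, which works out to on the order of 1 KB held per prefix, extra paths included. BGP will also report its own memory total directly:

  310,165,452 bytes held by BGP Router / 288,375 prefixes ~= 1,075 bytes per prefix

  ! reports a "BGP using N total bytes of memory" line
  show ip bgp summary | include memory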
Re: [c-nsp] 12k Full BGP Feed Memory Requirements
-----Original Message----- From: cisco-nsp-boun...@puck.nether.net On Behalf Of Antonio Soares

> I need help in order to calculate the memory needed to accommodate 2 or
> more full BGP feeds. This is for a 12400 running IOS. Today I saw this
> problem with some linecards:

Our GE-GBIC-SC-B's with 256 MB generally have about 100 MB of RAM free with 2 directly connected full feeds and at least 6 more through iBGP, so there may be a configuration issue on your side. Only recently have our Engine 0 cards been running out of memory, as they only have 128 MB.

bbr1.tor#execute-on slot 3 show proc mem | i Free
========= Line Card (Slot 3) =========
Total: 223634112, Used: 88582896, Free: 135051216

We have 12008s with GRP-Bs and 512 MB of RP RAM.

Hope this helps!

Ryan Werber
Epik Networks
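If you want to see where every card stands in one shot, something like this works on the GSRs I have touched (the slot number is just an example, and on some 12.0S images only the per-slot form may be available):

  ! Free processor memory on every line card at once
  execute-on all show processes memory | include Free
  ! Or one slot at a time
  execute-on slot 3 show processes memory | include Free
  ! Card types per slot (matches the FRU lines shown elsewhere in this thread)
  show diags | include FRU: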
Re: [c-nsp] Channelized DS3 over SM fiber handoff
Allstream at 151 Front Street in Toronto does this. They run a single strand of SMF and terminate it into a form of media converter, which hands off 2x BNC as expected for a DS3. They do this for both clear-channel and channelized DS3s. Interestingly enough, our channelized OC12s come in on a pair of SMF from them as well. I would imagine you would need a similar media converter - I'm sorry I don't have the model number of the equipment Allstream uses; all I know is that it is some sort of WDM equipment (obviously) on the fiber side.

Ryan Werber
Sr. Network Engineer
Epik Networks
AS21513

-----Original Message----- From: cisco-nsp-boun...@puck.nether.net On Behalf Of Seth Mattinen
Sent: Friday, May 01, 2009 6:42 PM
To: cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] Channelized DS3 over SM fiber handoff

> Troy Beisigl wrote:
>> Maybe they delivered a channelized OC3? I know that is an actual product,
>> but have never seen a DS3 as a fiber handoff.

Maybe; odd, though, if one asked for a DS3. If that's the case you can just get an OC3 port adapter.

~Seth
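For completeness, if it does turn out to be a channelized handoff, the Cisco side ends up as a controller carved into T1s, roughly like this. This is a generic sketch rather than anything from this thread - the slot/port, cablelength, timeslots and addressing are made up, and the resulting serial interface naming varies by platform and port adapter:

  ! Channelized DS3 (e.g. on a PA-MC-T3 style port adapter)
  controller T3 1/0
   framing c-bit
   cablelength 50
   t1 1 channel-group 0 timeslots 1-24
  !
  ! The carved channel then appears as a serial interface, for example:
  interface Serial1/0/1:0
   ip address 198.51.100.1 255.255.255.252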
Re: [c-nsp] suddenly lost telnet connection in switch
The default at the end of an ACL is to deny: once list 110 only permits www, everything else - including your telnet session to the switch - is dropped by the implicit deny at the end. You would have to put a permit for that other traffic (e.g. a permit tcp any any) in first to change that behavior; one way to lay it out is sketched below the quoted message.

-----Original Message----- From: cisco-nsp-boun...@puck.nether.net On Behalf Of chloe K
Sent: Friday, December 12, 2008 11:05 AM
To: cisco-nsp@puck.nether.net
Subject: [c-nsp] suddenly lost telnet connection in switch

> Hi
>
> I am using the following access-list to restrict http access to the switch,
> but when I apply it to the interface, I suddenly lose my telnet connection.
> Why?
>
> Extended IP access list 110
>     permit tcp 192.168.0.0 0.255.255.255 any eq www
>     permit tcp 172.16.0.0 0.255.255.255 any eq www
>     permit tcp 10.0.0.0 0.255.255.255 any eq www
>     deny tcp any eq www any
>     deny tcp any eq www any log
>
> switch(config)#interface VLAN1
> switch(config-if)#ip access-group 110 in
> switch(config-if)#
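Here is a minimal sketch of the permit-first approach described above, using the networks from the quoted ACL but a fresh list number (111) so the lines land in the right order. The telnet/SSH permits and the trailing permit ip any any are my additions - one way to keep management access, where the original advice was simply a broad permit tcp any any - and note that www is matched as the destination port here:

  access-list 111 permit tcp any any eq telnet
  access-list 111 permit tcp any any eq 22
  access-list 111 permit tcp 192.168.0.0 0.255.255.255 any eq www
  access-list 111 permit tcp 172.16.0.0 0.255.255.255 any eq www
  access-list 111 permit tcp 10.0.0.0 0.255.255.255 any eq www
  access-list 111 deny   tcp any any eq www log
  access-list 111 permit ip any any
  !
  interface Vlan1
   ip access-group 111 in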