Re: [c-nsp] Cisco IIH padding
IOS-XE MTU values are consistent with IOS. (IOS-XR differs from IOS/IOS-XE with regard to the respective L2/L3 MTU values.)

Brad

-----Original Message-----
From: cisco-nsp [mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of vinny_abe...@dell.com
Sent: Wednesday, 26 November 2014 04:58
To: nsp.li...@gmail.com; cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] Cisco IIH padding

The first five IS-IS hellos are still padded to the full MTU size, and subsequent hellos are not once the adjacency is formed; this is to detect MTU mismatches. If you cannot fix the MTU mismatch for some reason, you can also work around it by setting the CLNS MTU to match on both sides. I would recommend setting the physical MTU of the interface to what is supported on the link, though.

I'm not familiar with IOS-XE, but I know IOS-XR and IOS need to have their MTUs adjusted accordingly, which means they won't be the same numeric value. Someone more familiar with IOS-XE may know whether this is also an issue.

-Vinny

-----Original Message-----
From: cisco-nsp [mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Alex K.
Sent: Tuesday, November 25, 2014 12:52 AM
To: cisco-nsp@puck.nether.net
Subject: [c-nsp] Cisco IIH padding

Hello everybody,

Although I have “no hello padding” configured, the adjacency won't come up until I limit the CLNS MTU on one link in my network (there is an MTU issue on that link; its MTU is not 1500). As far as I remember, the Cisco IOS implementation of IS-IS will *still send the first* IIHs padded, never mind that I have “no hello padding” *configured*. On the other hand, that behavior doesn't seem to be documented.

Can anybody kindly point out for me (and probably for the rest of the list) the correct documentation for that, and whether this is still relevant for modern IOS/IOS-XE versions?

Thank you.
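A minimal sketch of the workaround Vinny describes, assuming a point-to-point Gigabit Ethernet link and an IS-IS process tag of CORE (the interface, tag, and 1400-byte value are illustrative; set clns mtu to whatever the problem link actually passes, identically on both ends):

  router isis CORE
   ! suppress padding of periodic hellos; per the thread, the first IIHs may still go out padded
   no hello padding
  !
  interface GigabitEthernet0/0
   ip router isis CORE
   ! pin the IS-IS PDU size below the broken link's real MTU, same value on both sides
   clns mtu 1400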
Re: [c-nsp] Effect of simultaneous TCP sessions on bandwidth
Apply a shaper (not a policer) towards the service provider at each end, at 95 Mbps or so (this will probably require tweaking). A single TCP session is probably managing to balance itself into the ~100 Mbps circuit. Two or more TCP sessions are probably bursting into a policer (and effectively into each other) often enough to ruin performance.

Brad

On 10/11/2013 4:12 PM, Youssef Bengelloun-Zahr wrote:

Hello community,

I need your help and hands-on experience to shed some light on a problem I'm facing. We have contracted a Layer 2 Ethernet connection hand-off between our DC (Frankfurt) and a customer site (Hamburg) with a carrier. The carrier provides us with an Ethernet MPLS pipe up to a DC in Hamburg and relies on a third-party local loop provider to extend it up to the customer site. Nothing new under the sun here.

We have been testing this connection because we think we are facing bandwidth issues. Let me summarize our results:

- The carrier claims the end-to-end Ethernet RFC 2544 test passed; we have checked the results and they seem OK.
- UDP traffic reaches up to 95 Mbit/s for one-way streams (both ways) and simultaneous bidirectional streams.
- TCP traffic reaches up to 90 Mbit/s for one-way streams (both ways).
- TCP traffic hits some kind of limit and isn't able to achieve more than 40-60 Mbit/s on average. That is the problem we are facing.

Some information I think is relevant:

- The FRA hand-off between our provider and our PE uses a GigE port.
- The HBG hand-off between our provider and the local-loop provider uses Fast Ethernet ports between their facing equipment.
- The CE in Hamburg is a Fast Ethernet port and is forced to 100/full duplex.

We have carried out tests with multiple devices directly connected behind our PE in FRA and the carrier's CE in HBG; the results are always the same. In the end, we connected servers directly in order to remove any unneeded equipment from the path; tests were carried out using iperf and some other tools. We have been debugging this with no improvement. We have tried everything (disabling all policers, etc.) and nothing nails it! Our provider claims this is normal behavior for TCP.

Does this sound normal to you?

Thanks for your help. Best regards.

--
Brad Gould, Network Engineer
iiNet / Internode
P: +61 8 8228 2999
brad...@internode.com.au
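A minimal MQC sketch of the shaper Brad suggests, applied outbound towards the carrier (the interface name is hypothetical, and 95 Mbps is a starting point to be tuned against the circuit):

  policy-map SHAPE-TO-CARRIER
   class class-default
    ! shape average takes bits per second; leave headroom below the 100 Mbps circuit rate
    shape average 95000000
  !
  interface GigabitEthernet0/1
   service-policy output SHAPE-TO-CARRIER

The shaper queues bursts instead of dropping them, which lets multiple TCP flows share the circuit without repeatedly tripping the carrier's policer.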
Re: [c-nsp] Question on ASR1000 over-subscription
The ASR1000 will drop traffic if you exceed the ESP's capacity. The router won't complain about multiple 10G SPAs, nor reject them (so *not* like a 7200 with bandwidth points).

Brad

-----Original Message-----
From: cisco-nsp [mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of CiscoNSP List
Sent: Friday, 6 September 2013 12:05 PM
To: cisco-nsp@puck.nether.net
Subject: [c-nsp] Question on ASR1000 over-subscription

Hi,

If you have an RP1/ESP10/SIP10 in an ASR and you install multiple 10G SPAs, will the ASR complain (reject the SPAs)? Or will it accept them, and if you exceed the aggregate bandwidth of the ESP, will you simply see dropped traffic?

Cheers.
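One way to watch for this in practice is to monitor the QFP load on the ESP; a sketch, noting that the exact command path and output format vary by IOS-XE release:

  Router# show platform hardware qfp active datapath utilization

Sustained high processing load there under normal traffic is the sign that the installed SPAs are oversubscribing the ESP.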
Re: [c-nsp] ASR performance
As a p.s. to this post: does anyone know if the ASR has ISG on the roadmap? I've found zero mention of ISG with regard to the ASR (which does limit its use in DSL aggregation).

Brad

MKS wrote:

Hi list,

I was wondering if somebody has had the chance to play with the new ASR? From the introduction of the ESP, it's supposed to terminate 8,000 subscribers on the ESP5 and 16,000 on the ESP10 (32,000 on the ESP20)? Has somebody had the chance to actually test PPPoE termination performance on this box, e.g. number of subscribers vs. throughput vs. load?

Thanks in advance,
MKS

http://www.cisco.com/en/US/prod/collateral/routers/ps9343/qa_c67-449980.html

Q. Where are the 5- and 10-Gbps ESPs positioned in a service provider's broadband network?

A. The Cisco ASR 1000 Series Router serves as a broadband aggregation router that terminates 8,000 to 16,000 subscriber sessions; supports features such as Cisco Session Border Controller (SBC) for voice over IP (VoIP), video TelePresence services, and hardware-assisted firewall for security; and requires Gigabit Ethernet or 10 Gigabit Ethernet uplink capability. The Cisco ASR 1000 Series Router is ideally suited for deployment as a Point-to-Point Termination and Aggregation (PTA) device, L2TP Access Concentrator (LAC), or L2TP Network Server (LNS).

--
Brad Gould, Network Engineer
Internode
Level 5, 150 Grenfell Street, Adelaide 5000
P: 08 8228 2999 F: 08 8235 6999
[EMAIL PROTECTED]; http://www.internode.on.net/
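For reference, a minimal PPPoE termination (PTA) sketch of the sort of configuration such a benchmark would exercise; the group name, address pool, and interface numbering are all hypothetical:

  bba-group pppoe SUBSCRIBERS
   virtual-template 1
  !
  interface Virtual-Template1
   ip unnumbered Loopback0
   peer default ip address pool DSL-POOL
   ppp authentication chap
  !
  interface GigabitEthernet0/0/1.100
   encapsulation dot1Q 100
   pppoe enable group SUBSCRIBERS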
Re: [c-nsp] Ethernet debugging...MTU?
Are you pinging with DF set? Most PA-FEs I've seen only support an MTU of 1500 bytes. Each device also tends to have a maximum packet size it can handle, even after fragmentation; I've seen some cheap CPE not respond to pings larger than about 8 KB.

Brad

Andy Dills wrote:

Ok, I have a weird situation that I'd like to get some input on.

Cliff notes: Is the inability to get ping replies with datagrams larger than 9216 bytes across a 100 Mbps Ethernet circuit an indication that the far end is set up with an MTU consistent with jumbo frames? What to do if the far end swears it's set for 1500?

Background: We're in the process of turning up a new fast-e to a company located in Equinix. We are not located in Equinix, but Level3 is, and we have Level3 fiber in our datacenter. There are two major segments of this circuit: the long haul to Ashburn and the cross connect in Equinix. The run at Equinix is too long for copper, and Level3 for some reason insisted upon a copper handoff, so Equinix supplied and installed transceivers to enable a fiber cross connect that is delivered via copper to the cage.

In our datacenter, the Ethernet circuit is connected directly via a short copper run from Level3's space to a standard FE port on a Cisco router that has previously been in use and is known to be working. If it matters, for now it's on a PA-2FEISL-TX (but it will later be moved to a more current PA when put into production; to my knowledge, even though that PA isn't ideal, it should still work fine at lower bandwidth levels and with proper full duplex, etc. The previously attached customer had no problems pushing 30 Mbps, for example).

When testing the circuit using the Cisco ping utility, with datagrams of 9216 bytes or less we have no packet loss. When datagrams larger than 9216 bytes are used, we have 100% packet loss. Given the packet size at which total failure occurs, my first reaction is that the other company has somehow misconfigured their switch to use jumbo frames, as if the circuit were a gig-e. According to the company at the far end, their device doesn't even support jumbo frames, and other people are attached and working fine. They seem quite certain the problem isn't on their end. I have a hard time not believing them; checking the MTU is a pretty cut-and-dried thing, and I'm working with senior-level people at the other company, who I'm assuming (perhaps an incorrect assumption) run and manage their international network.

To narrow down the problem, we had Level3 set up a laptop in their cage at Equinix, attached to the interface facing our datacenter. When testing to that, I had no packet loss with packets of any size up to the maximum of 18024 bytes. To me this eliminates the long-haul portion from consideration. We've also had Equinix double-check (at the supervisor level) that the transceivers are 10/100, hardcoded for 100 full duplex (as is everything else end to end). I also have a hard time believing that Equinix would have any difficulty installing the correct model of properly configured Ethernet transceivers; Ashburn is a top-notch facility with good people. (I was hoping, with my fingers crossed, that they had accidentally installed gig-e transceivers, or that Level3 had accidentally ordered gig-e transceivers... no luck.)

As this has been a lingering issue for some time, I'm currently pushing for technical reps from all of the companies involved to meet up at Equinix, sit in a room, and figure it out. This is a bit difficult for the other company as they don't have any real local staffing. So I'm hoping to come up with a solution that doesn't involve them getting on an airplane, but it's starting to look like our only avenue of resolution.

That said, has anybody encountered this before, or does anyone have theories about ways I can debug this, short of having the other company (who doesn't have local staff) visit their cage and attach a laptop facing us, to localize the issue to either their switch or the Equinix cross connect? Am I likely correct in my theory that something is configured for the jumbo-frame MTU and thus response packets aren't being properly fragmented?

Thanks for any insight.

Andy

---
Andy Dills
Xecunet, Inc.
www.xecu.net
301-682-9972
---

--
Brad Gould, Network Engineer
Internode
PO Box 284, Rundle Mall 5000
Level 3, 132 Grenfell Street, Adelaide 5000
P: 08 8228 2999 F: 08 8235 6999
[EMAIL PROTECTED]; http://www.internode.on.net/
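A quick way to separate a device packet-size ceiling from a link MTU limit, along the lines Brad suggests (the address and sizes are illustrative):

  Router# ping 192.0.2.1 size 9216 repeat 5    (fragmentation allowed: tests the far end's reassembly limit)
  Router# ping 192.0.2.1 size 9217 repeat 5
  Router# ping 192.0.2.1 size 1500 df-bit      (DF set: tests the smallest link MTU in the path)

A clean failure only above 9216 bytes with fragmentation allowed points at a device's maximum reassembled packet size rather than a jumbo-frame link MTU; a df-bit failure at 1500 bytes would point at a link in the path with an MTU below 1500.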
[c-nsp] NSE-150 issues
Hi,

Anyone else running 7304 NSE-150s? Having issues? Please contact me off list and we can swap notes.

Thanks,
Brad

--
Brad Gould, Network Engineer
Internode
PO Box 284, Rundle Mall 5000
Level 3, 132 Grenfell Street, Adelaide 5000
P: 08 8228 2999 F: 08 8235 6999
[EMAIL PROTECTED]; http://www.internode.on.net/
Re: [c-nsp] display last lines of logging
How about:

  rtr# sh log last x

where x = minutes?

Brad

Rodney Dunn wrote:

Good suggestion. Let me see if I can convince development to code it. We have some of those things with the event-trace infrastructure already:

  UUT_# sh monitor event-trace ssm ?
    all         Show all the traces in current buffer
    back        Show trace from this far back in the past
    clock       Show trace from a specific clock time/date
    from-boot   Show trace from this many seconds after booting
    latest      Show latest trace events since last display
    parameters  Paramters of the trace

On Thu, May 24, 2007 at 05:26:01PM +0300, Tassos Chatzithomaoglou wrote:

I was wondering: is there a way to display the last x lines of the log of a router (through the CLI)? Like the CatOS "sh logging buffer -x" does.

--
Tassos
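Until a keyword like that exists, IOS output filters are the usual workaround, though they select by pattern or starting point rather than by recency (the patterns below are illustrative):

  rtr# show logging | include %LINEPROTO-5
  rtr# show logging | begin May 24 17:

Both take a regular expression; "| begin" starts the output at the first matching line, which approximates "from time X onward" when you know the timestamp you care about.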