Re: [c-nsp] CEF Tuning with ECMP?

2010-10-03 Thread Tim Stevenson

Hi Mack,

An example is shown in figure 16 here:
http://www.cisco.com/application/pdf/en/us/guest/netsol/ns431/c649/ccmigration_09186a008093b876.pdf

Hope that helps,
Tim

At 03:18 PM 10/3/2010, Mack O'Brian remarked:
Thanks, Tim, for explaining the terminology. That was really 
beneficial. I have a question on your comments under 
polarization: "In a 'multi-tier' network, using the same hash 
input on each tier results in traffic after the 1st tier polarizing 
to a subset..." What is a 'multi-tier' network? Can you 
shed some light on this or point me to a URL? Thanks again.


Mack


On Sun, Oct 3, 2010 at 10:08 AM, Tim Stevenson 
<tstev...@cisco.com> wrote:

Hi John,
Let's get everyone agreed on the terminology first, then we can try 
to solve the problem.


* ECMP = Equal cost multipath, it is most typically a term used 
around unicast routing where for a given IP prefix you have multiple 
equal cost next hops and you load share traffic among them based on 
a hash (or less commonly per packet). The hash can be based on 
several criteria, ie, IPs & L4 ports in various combinations.


* CEF = it's interchangeable with 'ECMP' - CEF-based load sharing & 
ECMP mean the same thing.


* Multicast multipath = Uses a hash to select the RPF interface for 
a particular multicast flow when there are ECMP paths back to the 
source/RP. There are options to determine the input values (G, S,G, 
S,G+NH). This feature is not on by default in IOS. If it is not 
enabled, then IOS will choose ONE of the ECMP paths as the RPF 
(highest neighbor IP) and ALL multicast will be pulled over that link.
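For reference, the relevant IOS global configuration looks roughly like 
this (a hedged sketch covering the s-g-hash options discussed in this 
thread; verify the exact keywords on your platform and release):

  ! Enable RPF load-splitting across ECMP paths, hashing on S and G:
  ip multicast multipath s-g-hash basic
  ! Or additionally fold the PIM neighbor (next hop) into the hash:
  ip multicast multipath s-g-hash next-hop-based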


* Polarization = In a 'multi-tier' network, using the same hash 
input on each tier results in traffic after the 1st tier polarizing 
to a subset of the available links. It's accommodated for by adding a 
unique ID at each hop to the hash input for unicast; for multicast 
multipath, by including the next hop IP as hash input. Whether this 
really comes into play depends on the depth of the network routing topology.
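On the unicast side, the per-hop unique ID mentioned above is what the 
'universal' CEF load-sharing algorithm mixes in; a minimal sketch, 
assuming IOS (if the optional fixed ID is omitted, the router generates 
its own):

  ! Seed the CEF hash with a router-unique ID to fight polarization:
  ip cef load-sharing algorithm universal
  ! An explicit fixed ID can be appended for deterministic hashing.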


Ok - so given all of the above, with ECMP routing between the 7600s 
& the 4948s, and with multicast multipath already enabled on the 
7600 and using S,G basic hashing: if the traffic flow is from the 
4k->7600, the only option you have to improve things is to use S,G + 
next-hop. I'm not entirely convinced it will have a major impact, it 
depends on whether you have multiple levels of routing, one which is 
getting RPF hash selection pretty evenly but then at this layer, 
polarization is occurring since only a subset of traffic is reaching 
it and the hash input is the same (so only a subset of links is 
being selected as RPF). Based on your description I can't tell if 
that's a possibility in your setup.


Regardless of all that, changing CEF/ECMP hash input on the 4948 
will not have any significant impact, since that wouldn't affect 
multicast traffic at all, any particular S,G will still have only 
ONE of those four interfaces as an OIF, and that is driven by where 
the PIM join came in from the 7600, which in turn is driven by 
whether mcast multipath is enabled, and what hash is used to select 
the RPF interface.


Also, clearly, changing CEF/ECMP hash input on the 7600 would not 
have any impact since you're worried about traffic flowing the other 
direction anyway.


Hope that helps,
Tim

At 09:09 AM 10/3/2010, John Neiberger remarked:

I'm starting another thread because the topic is migrating. To
simplify, we have a 7600 with SUP720-3BXL connected via four routed
links to a 4948. The bulk of the traffic on this network is multicast
traffic flowing from the 4948 to the 7600 (and onward from there). In
order to get the best load sharing over those four links, what is the
recommended CEF tuning and ECMP configuration?

I ask because we seem to be running into ECMP polarization and/or CEF
polarization. We have already decided that we need to be using
s-g-hash next-hop-based for ECMP. We're using s-g-hash basic right
now. But what about CEF? Do we need to tune CEF along with tuning ECMP
for this to work properly? We want the most even distribution of
traffic possible. As it is right now, we're seeing serious unequal
load sharing. In some cases all of the traffic is going over one link
and not even using the other three.

Do any of you know the recommended CEF parameters in a situation like this?

Thanks,
John
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/






Tim Stevenson, tstev...@cisco.com
Routing & Switching CCIE #5561
Distinguished Technical Marketing Engineer, Cisco Nexus 7000

[c-nsp] Request for participation - Arbor 2010 Worldwide Infrastructure Security Report.

2010-10-03 Thread Dobbins, Roland


-

Folks,

We're in the process of collecting feedback for the 2010 Worldwide 
Infrastructure Security Report (WWISR); this is the Sixth Edition of the 
report, and we'd really be grateful for your participation!  This is the only 
security-focused survey we're aware of which is specifically oriented towards 
those who design, build, operate, and defend public-facing network 
infrastructure/applications/services, and provides the opportunity to share 
your experiences and perspectives with your peers in the operational community, 
as well as to benefit from their experiences and perspectives.

The 2010 Infrastructure Security Survey is up and available for your input.  
You can register to complete the survey via this URL, which will redirect your 
browser to the survey tool (the survey is accessed via http/s):



Once again, we've added several insightful questions from past participants.  
Feedback collection will end as soon as we've reached the desired number of 
respondents (ideally, 100+).

The results will be published in the 2010 Worldwide Infrastructure Security 
Report in January of 2011.  Also, please note that NO personally- or 
organizationally-identifiable information will be shared in any manner.

The 2009 edition of the survey is available here (registration required):



Thanks in advance!

-

---
Roland Dobbins  // 

   Sell your computer and buy a guitar.





___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] High CPU caused by interupt on 7600 router

2010-10-03 Thread Chris Bloor

On 1 Oct 2010, at 10:34, Rin wrote:
> I followed the Troubleshooting High CPU Utilization guide but still cannot
> detect which kind of traffic is punted to the CPU. Can anyone on the list show
> me how to detect which packets are being sent to the CPU on a 7609 router?

Have you checked the output of 'show cef not-cef-switched' and 'show ip cef 
switching statistics'? These commands may shed some more light on the reason 
for the high CPU if it's punted traffic causing the issue.
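If those counters do point at punts, one way to see the punted packets 
themselves on Sup720-class boxes is the netdr capture. A hedged sketch 
from memory; confirm these commands on your release before using them 
in production:

  ! Capture packets arriving at the RP in-band (bounded buffer, low impact):
  debug netdr capture rx
  ! Inspect the captured punted packets:
  show netdr captured-packets
  ! Clear the capture buffer when finished:
  debug netdr clear-capture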


Regards,
Chris.


___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IPv6 p2p transit link addressing

2010-10-03 Thread Billy Guthrie

cisco...@secureobscure.com wrote:

I have been working through a timeless topic: p2p transit link addressing.

After reading http://tools.ietf.org/tools/rfcmarkup/rfcmarkup.cgi?rfc=3627
and http://tools.ietf.org/html/draft-ietf-ipv6-addr-arch-v4-04 as well as a
Networkers presentation, various blogs, etc., I thought it was time to
consult people who might actually be dealing with this in production today
in hopes that I can learn from any potential mistakes.

Can I simply avoid the problem of p2p links altogether by running unnumbered
from an IPv6 loopback address?
I fear that link-local addressing will ruin traceroute information.

Any definitive answers would be greatly appreciated. I would prefer not to
burn up all of my routable address space with p2p /64's however.

Thanks for your time,

John


___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

  
There have been some heated debates on the subject with no real 
definitive answer; I have seen /64s, /112s, /120s, /126s, and /127s.


Steve Leibson takes a shot at putting it in real-world terms:

"So we could assign an IPv6 address to EVERY ATOM ON THE SURFACE OF 
THE EARTH, and still have enough addresses left to do another 100+ 
earths. It isn't remotely likely that we'll run out of IPv6 addresses at 
any time in the future!"


The point of this analogy is simply to show that there are an 
awful lot of addresses and that you should not be too concerned with 
conserving address space; however, it is to be taken with a grain of 
salt. I would highly recommend that your address policies avoid 
unnecessarily wasteful assignments, because the effective space does 
start to deteriorate from 2^128 to 2^64 (from 
340,282,366,920,938,463,463,374,607,431,768,211,456 to 
18,446,744,073,709,551,616) once everything is carved into /64 
subnets. In all honesty, if you are a large network, then you were 
more than likely allocated a /32, which only gives you an 8-16 bit 
improvement (a 16 bit improvement if all you assign are /48s, i.e., 
2^(48-32) = 65,536 possible /48 assignments). Once you start planning 
a practical address plan, the IPv6 allocation that you were assigned 
is not that big; proportionally, it is nothing more than a class B 
from v4 land.


Here is some additional reading and references:

*In RFC5375 Section 2.4.1 and 2.4.2*
"Address conservation requirements are less stringent in IPv6, but they 
should still be observed."


*In RFC5375 B.2. Considerations for Subnet Prefixes Longer than /64*
"The following subsections describe subnet prefix values that should be 
avoided - B.2.1. /126 Addresses"


*In RFC3177 Section 3 (Address Delegation Recommendations) *
"/64 when it is known that one and only one subnet is needed by design."

*http://www.getipv6.info/index.php/IPv6_Addressing_Plans - Point 10*

"IETF expects that you will assign a /64 for point-to-point links"

In my professional opinion, allocate a /64 and assign a /126 to a 
point-to-point link; do not assign anything else out of that /64.
I am sure there will be tons of responses to your question and you will 
not get one definitive answer; you as an engineer will need to take all 
the information and, based on your design and topology, make that 
decision.
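As a concrete sketch of that recommendation (assuming IOS syntax; 
2001:DB8::/32 is the documentation prefix, so substitute your own 
space, and interface numbers are illustrative):

  ! Reserve 2001:DB8:0:12::/64 for this p2p link; assign only a /126 from it:
  interface GigabitEthernet0/1
   ipv6 address 2001:DB8:0:12::1/126
  ! ...and on the far end:
  interface GigabitEthernet0/1
   ipv6 address 2001:DB8:0:12::2/126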


Good Luck

Billy
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IPv6 p2p transit link addressing

2010-10-03 Thread Seth Mattinen
On 10/3/2010 16:20, cisco...@secureobscure.com wrote:
> 
> Any definitive answers would be greatly appreciated. I would prefer not to
> burn up all of my routable address space with p2p /64's however.
> 

There is no definitive answer at this point. I use /112's for link nets.

~Seth
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] IPv6 p2p transit link addressing

2010-10-03 Thread cisconsp
I have been working through a timeless topic: p2p transit link addressing.

After reading http://tools.ietf.org/tools/rfcmarkup/rfcmarkup.cgi?rfc=3627
and http://tools.ietf.org/html/draft-ietf-ipv6-addr-arch-v4-04 as well as a
Networkers presentation, various blogs, etc., I thought it was time to
consult people who might actually be dealing with this in production today
in hopes that I can learn from any potential mistakes.

Can I simply avoid the problem of p2p links altogether by running unnumbered
from an IPv6 loopback address?
I fear that link-local addressing will ruin traceroute information.

Any definitive answers would be greatly appreciated. I would prefer not to
burn up all of my routable address space with p2p /64's however.

Thanks for your time,

John


___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Change to minimum prefix size on APNIC IPv4 address blocks

2010-10-03 Thread Srinivas Chendi



Change to minimum prefix size on APNIC IPv4 address blocks



In response to feedback from the community at APNIC 30, the minimum prefix
size on all APNIC IPv4 address blocks has been changed to a /24.

This change enables the IPv4 transfer policy to apply to all APNIC
ranges. The new size also allows Members to downgrade their membership
by returning unused IPv4 addresses.

Please update your network configurations, such as router filter sizes,
based on the IPv4 address block minimum prefix size where required.
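For example, a hedged IOS prefix-list fragment (1.0.0.0/8 is purely an
illustrative APNIC-administered range; build real filters from the
registry data):

  ! Accept routes from the block down to the new /24 minimum prefix size:
  ip prefix-list apnic-v4 permit 1.0.0.0/8 le 24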

For more information on IPv4 prefix sizes, please refer to:

http://www.apnic.net/db/min-alloc.html

If you have any questions please contact the APNIC Helpdesk at:

helpd...@apnic.net


Kind regards,



APNIC Secretariat                                 secretar...@apnic.net
Asia Pacific Network Information Centre (APNIC)   Tel: +61 7 3858 3100
PO Box 2131 Milton, QLD 4064 Australia            Fax: +61 7 3858 3199
Level 1, 33 Park Road, Milton, QLD                http://www.apnic.net

 * Sent by email to save paper. Print only if necessary.

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] CEF Tuning with ECMP?

2010-10-03 Thread Mack O'Brian
Thanks, Tim, for explaining the terminology. That was really beneficial. I
have a question on your comments under polarization: "In a
'multi-tier' network, using the same hash input on each tier results in
traffic after the 1st tier polarizing to a subset..." What is a
'multi-tier' network? Can you shed some light on this or point me to a URL?
Thanks again.

Mack


On Sun, Oct 3, 2010 at 10:08 AM, Tim Stevenson  wrote:

> Hi John,
> Let's get everyone agreed on the terminology first, then we can try to
> solve the problem.
>
> * ECMP = Equal cost multipath, it is most typically a term used around
> unicast routing where for a given IP prefix you have multiple equal cost
> next hops and you load share traffic among them based on a hash (or less
> commonly per packet). The hash can be based on several criteria, ie, IPs &
> L4 ports in various combinations.
>
> * CEF = it's interchangeable with 'ECMP' - CEF-based load sharing & ECMP
> mean the same thing.
>
> * Multicast multipath = Uses a hash to select the RPF interface for a
> particular multicast flow when there are ECMP paths back to the source/RP.
> There are options to determine the input values (G, S,G, S,G+NH). This
> feature is not on by default in IOS. If it is not enabled, then IOS will
> choose ONE of the ECMP paths as the RPF (highest neighbor IP) and ALL
> multicast will be pulled over that link.
>
> * Polarization = In a 'multi-tier' network, using the same hash input on
> each tier results in traffic after the 1st tier polarizing to a subset of
> the available links. It's accommodated for by adding a unique ID at each hop
> to the hash input for unicast; for multicast multipath, by including the
> next hop IP as hash input. Whether this really comes into play depends on
> the depth of the network routing topology.
>
> Ok - so given all of the above, with ECMP routing between the 7600s & the
> 4948s, and with multicast multipath already enabled on the 7600 and using
> S,G basic hashing: if the traffic flow is from the 4k->7600, the only option
> you have to improve things is to use S,G + next-hop. I'm not entirely
> convinced it will have a major impact, it depends on whether you have
> multiple levels of routing, one which is getting RPF hash selection pretty
> evenly but then at this layer, polarization is occurring since only a subset
> of traffic is reaching it and the hash input is the same (so only a subset
> of links is being selected as RPF). Based on your description I can't tell
> if that's a possibility in your setup.
>
> Regardless of all that, changing CEF/ECMP hash input on the 4948 will not
> have any significant impact, since that wouldn't affect multicast traffic at
> all, any particular S,G will still have only ONE of those four interfaces as
> an OIF, and that is driven by where the PIM join came in from the 7600,
> which in turn is driven by whether mcast multipath is enabled, and what hash
> is used to select the RPF interface.
>
> Also, clearly, changing CEF/ECMP hash input on the 7600 would not have any
> impact since you're worried about traffic flowing the other direction
> anyway.
>
> Hope that helps,
> Tim
>
> At 09:09 AM 10/3/2010, John Neiberger remarked:
>
>  I'm starting another thread because the topic is migrating. To
>> simplify, we have a 7600 with SUP720-3BXL connected via four routed
>> links to a 4948. The bulk of the traffic on this network is multicast
>> traffic flowing from the 4948 to the 7600 (and onward from there). In
>> order to get the best load sharing over those four links, what is the
>> recommended CEF tuning and ECMP configuration?
>>
>> I ask because we seem to be running into ECMP polarization and/or CEF
>> polarization. We have already decided that we need to be using
>> s-g-hash next-hop-based for ECMP. We're using s-g-hash basic right
>> now. But what about CEF? Do we need to tune CEF along with tuning ECMP
>> for this to work properly? We want the most even distribution of
>> traffic possible. As it is right now, we're seeing serious unequal
>> load sharing. In some cases all of the traffic is going over one link
>> and not even using the other three.
>>
>> Do any of you know the recommended CEF parameters in a situation like
>> this?
>>
>> Thanks,
>> John
>> ___
>> cisco-nsp mailing list  cisco-nsp@puck.nether.net
>> 
>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>> archive at 
>> http://puck.nether.net/pipermail/cisco-nsp/
>>
>
>
>
>
> Tim Stevenson, tstev...@cisco.com
> Routing & Switching CCIE #5561
> Distinguished Technical Marketing Engineer, Cisco Nexus 7000
> Cisco - http://www.cisco.com
> IP Phone: 408-526-6759
> 
> The contents of this message may be *Cisco Confidential*
> and are intended for the specified recipients only.
>
>
>
> ___

Re: [c-nsp] CEF Tuning with ECMP?

2010-10-03 Thread John Neiberger
Tim,

That's a great point. On the two switches that are most affected,
traffic is only going one way, so we could switch back to etherchannels. I'll
suggest that.

Thanks again,
John
On Oct 3, 2010 1:11 PM, "Tim Stevenson"  wrote:
> Hi John,
>
> For 4948->7600, I don't see why you would have channel replication
> issues. 4K is a shared memory buffer system, so there's no
> ingress/egress replication considerations or suboptimal distribution
> across the fabric of the MD packet for egress replication. There's
> only one copy of the packet in shared memory up to the point where a
> channel member is selected. In this situation I'd think a single 4
> port routed EC would be preferred, with all mroutes having the same
> OIF (the EC) but then allowing the EC hash to select the link based
> on L3+L4. But that's theoretical - if you had issues with that then
> I'm not necessarily suggesting you go back to it.
>
> Of course, if you have heavy mcast going back the other direction as
> well, then egress L3 replication w/EC behavior is a consideration.
>
> 2 cents,
> Tim
>
>
> At 10:16 AM 10/3/2010, John Neiberger remarked:
>
>>Thanks, Tim. I've only heard of ECMP in relation to multicast.
>>Thanks for clearing up the terminology. It sounds like changing the
>>multicast multipath hash is our only option.
>>
>>We used to use etherchannels but we switched to this to help work
>>around some multicast replication issues. If changing the hash
>>doesn't work, we may have to go back to etherchannels. I really hope
>>we don't have to do that.
>>
>>Thanks again!
>>On Oct 3, 2010 11:08 AM, "Tim Stevenson"
>><tstev...@cisco.com> wrote:
>> > Hi John,
>> > Let's get everyone agreed on the terminology first, then we can try
>> > to solve the problem.
>> >
>> > * ECMP = Equal cost multipath, it is most typically a term used
>> > around unicast routing where for a given IP prefix you have multiple
>> > equal cost next hops and you load share traffic among them based on a
>> > hash (or less commonly per packet). The hash can be based on several
>> > criteria, ie, IPs & L4 ports in various combinations.
>> >
>> > * CEF = it's interchangeable with 'ECMP' - CEF-based load sharing &
>> > ECMP mean the same thing.
>> >
>> > * Multicast multipath = Uses a hash to select the RPF interface for a
>> > particular multicast flow when there are ECMP paths back to the
>> > source/RP. There are options to determine the input values (G, S,G,
>> > S,G+NH). This feature is not on by default in IOS. If it is not
>> > enabled, then IOS will choose ONE of the ECMP paths as the RPF
>> > (highest neighbor IP) and ALL multicast will be pulled over that link.
>> >
>> > * Polarization = In a 'multi-tier' network, using the same hash input
>> > on each tier results in traffic after the 1st tier polarizing to a
>> > subset of the available links. It's accommodated for by adding a
>> > unique ID at each hop to the hash input for unicast; for multicast
>> > multipath, by including the next hop IP as hash input. Whether this
>> > really comes into play depends on the depth of the network
>> routing topology.
>> >
>> > Ok - so given all of the above, with ECMP routing between the 7600s &
>> > the 4948s, and with multicast multipath already enabled on the 7600
>> > and using S,G basic hashing: if the traffic flow is from the
>> > 4k->7600, the only option you have to improve things is to use S,G +
>> > next-hop. I'm not entirely convinced it will have a major impact, it
>> > depends on whether you have multiple levels of routing, one which is
>> > getting RPF hash selection pretty evenly but then at this layer,
>> > polarization is occurring since only a subset of traffic is reaching
>> > it and the hash input is the same (so only a subset of links is being
>> > selected as RPF). Based on your description I can't tell if that's a
>> > possibility in your setup.
>> >
>> > Regardless of all that, changing CEF/ECMP hash input on the 4948 will
>> > not have any significant impact, since that wouldn't affect multicast
>> > traffic at all, any particular S,G will still have only ONE of those
>> > four interfaces as an OIF, and that is driven by where the PIM join
>> > came in from the 7600, which in turn is driven by whether mcast
>> > multipath is enabled, and what hash is used to select the RPF
interface.
>> >
>> > Also, clearly, changing CEF/ECMP hash input on the 7600 would not
>> > have any impact since you're worried about traffic flowing the other
>> > direction anyway.
>> >
>> > Hope that helps,
>> > Tim
>> >
>> > At 09:09 AM 10/3/2010, John Neiberger remarked:
>> >
>> >>I'm starting another thread because the topic is migrating. To
>> >>simplify, we have a 7600 with SUP720-3BXL connected via four routed
>> >>links to a 4948. The bulk of the traffic on this network is multicast
>> >>traffic flowing from the 4948 to the 7600 (and onward from there). In
>> >>order to get the best load sharing over those four links, what is the
>> >

Re: [c-nsp] CEF Tuning with ECMP?

2010-10-03 Thread Tim Stevenson

Hi John,

For 4948->7600, I don't see why you would have channel replication 
issues. 4K is a shared memory buffer system, so there's no 
ingress/egress replication considerations or suboptimal distribution 
across the fabric of the MD packet for egress replication. There's 
only one copy of the packet in shared memory up to the point where a 
channel member is selected. In this situation I'd think a single 4 
port routed EC would be preferred, with all mroutes having the same 
OIF (the EC) but then allowing the EC hash to select the link based 
on L3+L4. But that's theoretical - if you had issues with that then 
I'm not necessarily suggesting you go back to it.
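Roughly, that theoretical setup on the 4948 would look like this (a
sketch only; interface numbers and addressing are illustrative, and the
load-balance keyword set varies by platform/release):

  ! Select the channel member using L4 ports:
  port-channel load-balance src-dst-port
  ! One routed 4-port EtherChannel toward the 7600:
  interface Port-channel1
   no switchport
   ip address 192.0.2.1 255.255.255.252
  interface range GigabitEthernet1/45 - 48
   no switchport
   channel-group 1 mode active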


Of course, if you have heavy mcast going back the other direction as 
well, then egress L3 replication w/EC behavior is a consideration.


2 cents,
Tim


At 10:16 AM 10/3/2010, John Neiberger remarked:

Thanks, Tim. I've only heard of ECMP in relation to multicast. 
Thanks for clearing up the terminology. It sounds like changing the 
multicast multipath hash is our only option.


We used to use etherchannels but we switched to this to help work 
around some multicast replication issues. If changing the hash 
doesn't work, we may have to go back to etherchannels. I really hope 
we don't have to do that.


Thanks again!
On Oct 3, 2010 11:08 AM, "Tim Stevenson" 
<tstev...@cisco.com> wrote:

> Hi John,
> Let's get everyone agreed on the terminology first, then we can try
> to solve the problem.
>
> * ECMP = Equal cost multipath, it is most typically a term used
> around unicast routing where for a given IP prefix you have multiple
> equal cost next hops and you load share traffic among them based on a
> hash (or less commonly per packet). The hash can be based on several
> criteria, ie, IPs & L4 ports in various combinations.
>
> * CEF = it's interchangeable with 'ECMP' - CEF-based load sharing &
> ECMP mean the same thing.
>
> * Multicast multipath = Uses a hash to select the RPF interface for a
> particular multicast flow when there are ECMP paths back to the
> source/RP. There are options to determine the input values (G, S,G,
> S,G+NH). This feature is not on by default in IOS. If it is not
> enabled, then IOS will choose ONE of the ECMP paths as the RPF
> (highest neighbor IP) and ALL multicast will be pulled over that link.
>
> * Polarization = In a 'multi-tier' network, using the same hash input
> on each tier results in traffic after the 1st tier polarizing to a
> subset of the available links. It's accommodated for by adding a
> unique ID at each hop to the hash input for unicast; for multicast
> multipath, by including the next hop IP as hash input. Whether this
> really comes into play depends on the depth of the network 
routing topology.

>
> Ok - so given all of the above, with ECMP routing between the 7600s &
> the 4948s, and with multicast multipath already enabled on the 7600
> and using S,G basic hashing: if the traffic flow is from the
> 4k->7600, the only option you have to improve things is to use S,G +
> next-hop. I'm not entirely convinced it will have a major impact, it
> depends on whether you have multiple levels of routing, one which is
> getting RPF hash selection pretty evenly but then at this layer,
> polarization is occurring since only a subset of traffic is reaching
> it and the hash input is the same (so only a subset of links is being
> selected as RPF). Based on your description I can't tell if that's a
> possibility in your setup.
>
> Regardless of all that, changing CEF/ECMP hash input on the 4948 will
> not have any significant impact, since that wouldn't affect multicast
> traffic at all, any particular S,G will still have only ONE of those
> four interfaces as an OIF, and that is driven by where the PIM join
> came in from the 7600, which in turn is driven by whether mcast
> multipath is enabled, and what hash is used to select the RPF interface.
>
> Also, clearly, changing CEF/ECMP hash input on the 7600 would not
> have any impact since you're worried about traffic flowing the other
> direction anyway.
>
> Hope that helps,
> Tim
>
> At 09:09 AM 10/3/2010, John Neiberger remarked:
>
>>I'm starting another thread because the topic is migrating. To
>>simplify, we have a 7600 with SUP720-3BXL connected via four routed
>>links to a 4948. The bulk of the traffic on this network is multicast
>>traffic flowing from the 4948 to the 7600 (and onward from there). In
>>order to get the best load sharing over those four links, what is the
>>recommended CEF tuning and ECMP configuration?
>>
>>I ask because we seem to be running into ECMP polarization and/or CEF
>>polarization. We have already decided that we need to be using
>>s-g-hash next-hop-based for ECMP. We're using s-g-hash basic right
>>now. But what about CEF? Do we need to tune CEF along with tuning ECMP
>>for this to work properly? We want the most even distribution of
traffic possible. As it is right now, we're seeing serious unequal

Re: [c-nsp] CEF Tuning with ECMP?

2010-10-03 Thread John Neiberger
Thanks, Tim. I've only heard of ECMP in relation to multicast. Thanks for
clearing up the terminology. It sounds like changing the multicast multipath
hash is our only option.

We used to use etherchannels but we switched to this to help work around
some multicast replication issues. If changing the hash doesn't work, we may
have to go back to etherchannels. I really hope we don't have to do that.

Thanks again!
On Oct 3, 2010 11:08 AM, "Tim Stevenson"  wrote:
> Hi John,
> Let's get everyone agreed on the terminology first, then we can try
> to solve the problem.
>
> * ECMP = Equal cost multipath, it is most typically a term used
> around unicast routing where for a given IP prefix you have multiple
> equal cost next hops and you load share traffic among them based on a
> hash (or less commonly per packet). The hash can be based on several
> criteria, ie, IPs & L4 ports in various combinations.
>
> * CEF = it's interchangeable with 'ECMP' - CEF-based load sharing &
> ECMP mean the same thing.
>
> * Multicast multipath = Uses a hash to select the RPF interface for a
> particular multicast flow when there are ECMP paths back to the
> source/RP. There are options to determine the input values (G, S,G,
> S,G+NH). This feature is not on by default in IOS. If it is not
> enabled, then IOS will choose ONE of the ECMP paths as the RPF
> (highest neighbor IP) and ALL multicast will be pulled over that link.
>
> * Polarization = In a 'multi-tier' network, using the same hash input
> on each tier results in traffic after the 1st tier polarizing to a
> subset of the available links. It's accommodated for by adding a
> unique ID at each hop to the hash input for unicast; for multicast
> multipath, by including the next hop IP as hash input. Whether this
> really comes into play depends on the depth of the network routing
topology.
>
> Ok - so given all of the above, with ECMP routing between the 7600s &
> the 4948s, and with multicast multipath already enabled on the 7600
> and using S,G basic hashing: if the traffic flow is from the
> 4k->7600, the only option you have to improve things is to use S,G +
> next-hop. I'm not entirely convinced it will have a major impact, it
> depends on whether you have multiple levels of routing, one which is
> getting RPF hash selection pretty evenly but then at this layer,
> polarization is occurring since only a subset of traffic is reaching
> it and the hash input is the same (so only a subset of links is being
> selected as RPF). Based on your description I can't tell if that's a
> possibility in your setup.
>
> Regardless of all that, changing CEF/ECMP hash input on the 4948 will
> not have any significant impact, since that wouldn't affect multicast
> traffic at all, any particular S,G will still have only ONE of those
> four interfaces as an OIF, and that is driven by where the PIM join
> came in from the 7600, which in turn is driven by whether mcast
> multipath is enabled, and what hash is used to select the RPF interface.
>
> Also, clearly, changing CEF/ECMP hash input on the 7600 would not
> have any impact since you're worried about traffic flowing the other
> direction anyway.
>
> Hope that helps,
> Tim
>
> At 09:09 AM 10/3/2010, John Neiberger remarked:
>
>>I'm starting another thread because the topic is migrating. To
>>simplify, we have a 7600 with SUP720-3BXL connected via four routed
>>links to a 4948. The bulk of the traffic on this network is multicast
>>traffic flowing from the 4948 to the 7600 (and onward from there). In
>>order to get the best load sharing over those four links, what is the
>>recommended CEF tuning and ECMP configuration?
>>
>>I ask because we seem to be running into ECMP polarization and/or CEF
>>polarization. We have already decided that we need to be using
>>s-g-hash next-hop-based for ECMP. We're using s-g-hash basic right
>>now. But what about CEF? Do we need to tune CEF along with tuning ECMP
>>for this to work properly? We want the most even distribution of
>>traffic possible. As it is right now, we're seeing serious unequal
>>load sharing. In some cases all of the traffic is going over one link
>>and not even using the other three.
>>
>>Do any of you know the recommended CEF parameters in a situation like
this?
>>
>>Thanks,
>>John
>>___
>>cisco-nsp mailing list cisco-nsp@puck.nether.net
>>
https://puck.nether.net/mailman/listinfo/cisco-nsp
>>archive at
>>
http://puck.nether.net/pipermail/cisco-nsp/
>
>
>
>
> Tim Stevenson, tstev...@cisco.com
> Routing & Switching CCIE #5561
> Distinguished Technical Marketing Engineer, Cisco Nexus 7000
> Cisco - http://www.cisco.com
> IP Phone: 408-526-6759
> 
> The contents of this message may be *Cisco Confidential*
> and are intended for the specified recipients only.
>
>
___

Re: [c-nsp] CEF Tuning with ECMP?

2010-10-03 Thread Tim Stevenson

Hi John,
Let's get everyone agreed on the terminology first, then we can try 
to solve the problem.


* ECMP = Equal cost multipath, it is most typically a term used 
around unicast routing where for a given IP prefix you have multiple 
equal cost next hops and you load share traffic among them based on a 
hash (or less commonly per packet). The hash can be based on several 
criteria, ie, IPs & L4 ports in various combinations.


* CEF = it's interchangeable with 'ECMP' - CEF-based load sharing & 
ECMP mean the same thing.


* Multicast multipath = Uses a hash to select the RPF interface for a 
particular multicast flow when there are ECMP paths back to the 
source/RP. There are options to determine the input values (G, S,G, 
S,G+NH). This feature is not on by default in IOS. If it is not 
enabled, then IOS will choose ONE of the ECMP paths as the RPF 
(highest neighbor IP) and ALL multicast will be pulled over that link.


* Polarization = In a 'multi-tier' network, using the same hash input 
on each tier results in traffic after the 1st tier polarizing to a 
subset of the available links. It's accommodated for by 
unique ID at each hop to the hash input for unicast; for multicast 
multipath, by including the next hop IP as hash input. Whether this 
really comes into play depends on the depth of the network routing topology.


Ok - so given all of the above, with ECMP routing between the 7600s & 
the 4948s, and with multicast multipath already enabled on the 7600 
and using S,G basic hashing: if the traffic flow is from the 
4k->7600, the only option you have to improve things is to use S,G + 
next-hop. I'm not entirely convinced it will have a major impact, it 
depends on whether you have multiple levels of routing, one which is 
getting RPF hash selection pretty evenly but then at this layer, 
polarization is occurring since only a subset of traffic is reaching 
it and the hash input is the same (so only a subset of links is being 
selected as RPF). Based on your description I can't tell if that's a 
possibility in your setup.


Regardless of all that, changing CEF/ECMP hash input on the 4948 will 
not have any significant impact, since that wouldn't affect multicast 
traffic at all, any particular S,G will still have only ONE of those 
four interfaces as an OIF, and that is driven by where the PIM join 
came in from the 7600, which in turn is driven by whether mcast 
multipath is enabled, and what hash is used to select the RPF interface.


Also, clearly, changing CEF/ECMP hash input on the 7600 would not 
have any impact since you're worried about traffic flowing the other 
direction anyway.


Hope that helps,
Tim

At 09:09 AM 10/3/2010, John Neiberger remarked:


I'm starting another thread because the topic is migrating. To
simplify, we have a 7600 with SUP720-3BXL connected via four routed
links to a 4948. The bulk of the traffic on this network is multicast
traffic flowing from the 4948 to the 7600 (and onward from there). In
order to get the best load sharing over those four links, what is the
recommended CEF tuning and ECMP configuration?

I ask because we seem to be running into ECMP polarization and/or CEF
polarization. We have already decided that we need to be using
s-g-hash next-hop-based for ECMP. We're using s-g-hash basic right
now. But what about CEF? Do we need to tune CEF along with tuning ECMP
for this to work properly? We want the most even distribution of
traffic possible. As it is right now, we're seeing serious unequal
load sharing. In some cases all of the traffic is going over one link
and not even using the other three.

Do any of you know the recommended CEF parameters in a situation like this?

Thanks,
John
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at 
http://puck.nether.net/pipermail/cisco-nsp/





Tim Stevenson, tstev...@cisco.com
Routing & Switching CCIE #5561
Distinguished Technical Marketing Engineer, Cisco Nexus 7000
Cisco - http://www.cisco.com
IP Phone: 408-526-6759

The contents of this message may be *Cisco Confidential*
and are intended for the specified recipients only.


___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] CEF Tuning with ECMP?

2010-10-03 Thread John Neiberger
I'm starting another thread because the topic is migrating. To
simplify, we have a 7600 with SUP720-3BXL connected via four routed
links to a 4948. The bulk of the traffic on this network is multicast
traffic flowing from the 4948 to the 7600 (and onward from there). In
order to get the best load sharing over those four links, what is the
recommended CEF tuning and ECMP configuration?

I ask because we seem to be running into ECMP polarization and/or CEF
polarization. We have already decided that we need to be using
s-g-hash next-hop-based for ECMP. We're using s-g-hash basic right
now. But what about CEF? Do we need to tune CEF along with tuning ECMP
for this to work properly? We want the most even distribution of
traffic possible. As it is right now, we're seeing serious unequal
load sharing. In some cases all of the traffic is going over one link
and not even using the other three.

Do any of you know the recommended CEF parameters in a situation like this?

Thanks,
John
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] ECMP failing over time?

2010-10-03 Thread John Neiberger
Interesting. That would be something if we were really running into
CEF polarization instead of (or perhaps in addition to) ECMP
polarization. I'm reading some documents about it now and I've also
updated our TAC case to get some guidance. We have a maintenance window
scheduled tonight at midnight to change the ECMP hash to next-hop-based. If we
need to make a CEF change, we'd better do it at the same time.
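If we do, the combined change I have in mind looks like this (a sketch
only; exact keywords for our platform still to be confirmed with TAC):

  ip multicast multipath s-g-hash next-hop-based
  ip cef load-sharing algorithm universal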

On Sun, Oct 3, 2010 at 5:55 AM, Chris Evans  wrote:
> I believe what you guys are referring to is cef polarization..   we
> implement extra commands at every other tier to fight this..  I just can't
> remember them right now lol.
>
> On Oct 3, 2010 3:47 AM, "Reinhold Fischer"  wrote:
>> Is "ip multicast multipath" enabled? Take care of the usage guidlines
>> and limitations before enabling it ...
>>
>> hth
>> Reinhold
>>
>> On Sun, Oct 3, 2010 at 7:10 AM, John Neiberger 
>> wrote:
>>> This is entirely multicast. We used the s-g-hash to lock each S,G to a
>>> link, but we didn't think it through. We really should have started
>>> out using the next-hop-based hash so that the same S,G can be served
>>> by any link in the group. With s-g-hash, it always gets locked to the
>>> same bundle.
>>>
>>> However, I just thought of another potential culprit. I'm going to
>>> have to think it through, though.
>>>
>>> On Sat, Oct 2, 2010 at 10:17 PM, Keegan Holley
>>>  wrote:
 I've seen similar effects.  I'm not sure there's a method to evenly
 distribute traffic for an indefinite period.  I'm also not sure what
 you're
 routing, but the problems I've seen are usually caused by the fact that
 each
 flow/hash result differs in size and duration.  Adding extra variables
 to
 the equation always helps, but it's almost impossible to keep an even
 spread.  I suppose your current goal is to simply stop the outages
 though.


 On Sat, Oct 2, 2010 at 7:17 PM, John Neiberger 
 wrote:
>
> I hate to answer my own question, but I think I figured it out. We're
> using s-g-hash basic, which is prone to polarization. I think that's
> what we're seeing. Our traffic has become polarized and has developed
> an affinity for a subset of links in our "bundles". I'm recommending
> that we switch to s-g-hash next-hop-based to see if that resolves the
> problem.
>
> On Sat, Oct 2, 2010 at 2:18 PM, John Neiberger 
> wrote:
> > We converted several connections last week from Etherchannels to
> > routed links with ECMP. We verified that traffic was load-sharing
> > over
> > those links after making the change. Now, a week later, we are seeing
> > instances where traffic is preferring one or two links out of each
> > "bundle". In some cases all the traffic is flowing over a single link
> > in a four-link setup. This is overloading those connections and we
> > can't figure out why. We are using s-g-hash basic. Should we switch
> > to
> > s-g-hash next-hop-based?
> >
> > This is causing production issues right now, so I've opened up a TAC
> > case, but I thought I'd ask here, as well, just in case someone had
> > seen this before.
> >
> > Thanks,
> > John
> >
> ___
> cisco-nsp mailing list  cisco-...@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>
>


>>>
>>> ___
>>> cisco-nsp mailing list  cisco-...@puck.nether.net
>>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>>> archive at http://puck.nether.net/pipermail/cisco-nsp/
>>>
>>
>> ___
>> cisco-nsp mailing list cisco-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>> archive at http://puck.nether.net/pipermail/cisco-nsp/
>

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] ECMP failing over time?

2010-10-03 Thread John Neiberger
Yes, ECMP is enabled. It's just not behaving the way we initially expected.

On Sun, Oct 3, 2010 at 1:39 AM, Reinhold Fischer
 wrote:
> Is "ip multicast multipath" enabled? Take care of the usage guidlines
> and limitations before enabling it ...
>
> hth
> Reinhold
>
> On Sun, Oct 3, 2010 at 7:10 AM, John Neiberger  wrote:
>> This is entirely multicast. We used the s-g-hash to lock each S,G to a
>> link, but we didn't think it through. We really should have started
>> out using the next-hop-based hash so that the same S,G can be served
>> by any link in the group. With s-g-hash, it always gets locked to the
>> same bundle.
>>
>> However, I just thought of another potential culprit. I'm going to
>> have to think it through, though.
>>
>> On Sat, Oct 2, 2010 at 10:17 PM, Keegan Holley
>>  wrote:
>>> I've seen similar effects.  I'm not sure there's a method to evenly
>>> distribute traffic for an indefinite period.  I'm also not sure what you're
>>> routing, but the problems I've seen are usually caused by the fact that each
>>> flow/hash result differs in size and duration.  Adding extra variables to
>>> the equation always helps, but it's almost impossible to keep an even
>>> spread.  I suppose your current goal is to simply stop the outages though.
>>>
>>>
>>> On Sat, Oct 2, 2010 at 7:17 PM, John Neiberger  wrote:

 I hate to answer my own question, but I think I figured it out. We're
 using s-g-hash basic, which is prone to polarization. I think that's
 what we're seeing. Our traffic has become polarized and has developed
 an affinity for a subset of links in our "bundles". I'm recommending
 that we switch to s-g-hash next-hop-based to see if that resolves the
 problem.

 On Sat, Oct 2, 2010 at 2:18 PM, John Neiberger 
 wrote:
 > We converted several connections last week from Etherchannels to
 > routed links with ECMP. We verified that traffic was load-sharing over
 > those links after making the change. Now, a week later, we are seeing
 > instances where traffic is preferring one or two links out of each
 > "bundle". In some cases all the traffic is flowing over a single link
 > in a four-link setup. This is overloading those connections and we
 > can't figure out why. We are using s-g-hash basic. Should we switch to
 > s-g-hash next-hop-based?
 >
 > This is causing production issues right now, so I've opened up a TAC
 > case, but I thought I'd ask here, as well, just in case someone had
 > seen this before.
 >
 > Thanks,
 > John
 >
 ___
 cisco-nsp mailing list  cisco-...@puck.nether.net
 https://puck.nether.net/mailman/listinfo/cisco-nsp
 archive at http://puck.nether.net/pipermail/cisco-nsp/


>>>
>>>
>>
>> ___
>> cisco-nsp mailing list  cisco-...@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>> archive at http://puck.nether.net/pipermail/cisco-nsp/
>>
>

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] WiMAX Network

2010-10-03 Thread Heath Jones
> My network topology is a main ring with some sub-rings connected to the main
> one, and some sites are star-connected.
> Anyway, if I am offering some basic Internet connectivity such as WiMAX
> service and leased lines, what is best for me: to make it L2 or L3?
> And if I use L3 and enable OSPF, is it best to make only a single area? And if
> multiple areas, should I implement the stub concept?

Hi Mohammad,

STP is not a good solution in ring environments; instead, there are
various Ethernet ring protection protocols about, such as the example
sketched below.
I would suggest running OSPF. With regard to the number of areas,
that purely depends on the number of prefixes you will be flooding
and the number of DR/BDR election participants in each L2 segment.
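If you do go multi-area with stubs, a minimal sketch (process ID, area
numbers, and networks are all illustrative):

  router ospf 1
   network 10.0.0.0 0.0.0.255 area 0
   network 10.0.2.0 0.0.0.255 area 2
   ! All routers in area 2 must agree that it is a stub:
   area 2 stub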

It is also worth bearing in mind that if down the road your topology
changes, (say a new link from a router in area 2 to a router in area
3), what changes would you need to make to your configurations and how
would this impact the desired traffic paths?

Also worth considering are your customer requirements. If these are
not kept in the back of your mind while designing the network at the
start, they could be a real challenge to implement later on!
Are you going to be selling these people VPN services? Are they to run
over MPLS? What about multicast? etc etc


I hope this helps a little :)


Cheers
Heath
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] WiMAX Network

2010-10-03 Thread Mohammad Khalil

hi all 

My network topology is a main ring with some sub-rings connected to the main one,
and some sites are star-connected.
Anyway, if I am offering some basic Internet connectivity such as WiMAX service
and leased lines, what is best for me: to make it L2 or L3?
And if I use L3 and enable OSPF, is it best to make only a single area? And if
multiple areas, should I implement the stub concept?


Thanks
  
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] ECMP failing over time?

2010-10-03 Thread Chris Evans
http://www.cisco.com/en/US/docs/solutions/Enterprise/Campus/HA_campus_DG/hacampusdg.pdf

I'm doing this from my phone so I am not sure how that url comes through.
On Oct 3, 2010 7:55 AM, "Chris Evans"  wrote:
> I believe what you guys are referring to is cef polarization.. we
> implement extra commands at every other tier to fight this.. I just can't
> remember them right now lol.
> On Oct 3, 2010 3:47 AM, "Reinhold Fischer" 
wrote:
>> Is "ip multicast multipath" enabled? Take care of the usage guidlines
>> and limitations before enabling it ...
>>
>> hth
>> Reinhold
>>
>> On Sun, Oct 3, 2010 at 7:10 AM, John Neiberger 
> wrote:
>>> This is entirely multicast. We used the s-g-hash to lock each S,G to a
>>> link, but we didn't think it through. We really should have started
>>> out using the next-hop-based hash so that the same S,G can be served
>>> by any link in the group. With s-g-hash, it always gets locked to the
>>> same bundle.
>>>
>>> However, I just thought of another potential culprit. I'm going to
>>> have to think it through, though.
>>>
>>> On Sat, Oct 2, 2010 at 10:17 PM, Keegan Holley
>>>  wrote:
 I've seen similar effects. I'm not sure there's a method to evenly
 distribute traffic for an indefinite period. I'm also not sure what
> you're
 routing, but the problems I've seen are usually caused by the fact that
> each
 flow/hash result differs in size and duration. Adding extra variables
> to
 the equation always helps, but it's almost impossible to keep an even
 spread. I suppose your current goal is to simply stop the outages
> though.


 On Sat, Oct 2, 2010 at 7:17 PM, John Neiberger 
> wrote:
>
> I hate to answer my own question, but I think I figured it out. We're
> using s-g-hash basic, which is prone to polarization. I think that's
> what we're seeing. Our traffic has become polarized and has developed
> an affinity for a subset of links in our "bundles". I'm recommending
> that we switch to s-g-hash next-hop-based to see if that resolves the
> problem.
>
> On Sat, Oct 2, 2010 at 2:18 PM, John Neiberger 
> wrote:
> > We converted several connections last week from Etherchannels to
> > routed links with ECMP. We verified that traffic was load-sharing
> over
> > those links after making the change. Now, a week later, we are
seeing
> > instances where traffic is preferring one or two links out of each
> > "bundle". In some cases all the traffic is flowing over a single
link
> > in a four-link setup. This is overloading those connections and we
> > can't figure out why. We are using s-g-hash basic. Should we switch
> to
> > s-g-hash next-hop-based?
> >
> > This is causing production issues right now, so I've opened up a TAC
> > case, but I thought I'd ask here, as well, just in case someone had
> > seen this before.
> >
> > Thanks,
> > John
> >
> ___
> cisco-nsp mailing list cisco-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>
>


>>>
>>> ___
>>> cisco-nsp mailing list cisco-nsp@puck.nether.net
>>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>>> archive at http://puck.nether.net/pipermail/cisco-nsp/
>>>
>>
>> ___
>> cisco-nsp mailing list cisco-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>> archive at http://puck.nether.net/pipermail/cisco-nsp/
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] ECMP failing over time?

2010-10-03 Thread Chris Evans
I believe what you guys are referring to is cef polarization..   we
implement extra commands at every other tier to fight this..  I just can't
remember them right now lol.
On Oct 3, 2010 3:47 AM, "Reinhold Fischer"  wrote:
> Is "ip multicast multipath" enabled? Take care of the usage guidlines
> and limitations before enabling it ...
>
> hth
> Reinhold
>
> On Sun, Oct 3, 2010 at 7:10 AM, John Neiberger 
wrote:
>> This is entirely multicast. We used the s-g-hash to lock each S,G to a
>> link, but we didn't think it through. We really should have started
>> out using the next-hop-based hash so that the same S,G can be served
>> by any link in the group. With s-g-hash, it always gets locked to the
>> same bundle.
>>
>> However, I just thought of another potential culprit. I'm going to
>> have to think it through, though.
>>
>> On Sat, Oct 2, 2010 at 10:17 PM, Keegan Holley
>>  wrote:
>>> I've seen similar effects.  I'm not sure there's a method to evenly
>>> distribute traffic for an indefinite period.  I'm also not sure what
you're
>>> routing, but the problems I've seen are usually caused by the fact that
each
>>> flow/hash result differs in size and duration.  Adding extra variables
to
>>> the equation always helps, but it's almost impossible to keep an even
>>> spread.  I suppose your current goal is to simply stop the outages
though.
>>>
>>>
>>> On Sat, Oct 2, 2010 at 7:17 PM, John Neiberger 
wrote:

 I hate to answer my own question, but I think I figured it out. We're
 using s-g-hash basic, which is prone to polarization. I think that's
 what we're seeing. Our traffic has become polarized and has developed
 an affinity for a subset of links in our "bundles". I'm recommending
 that we switch to s-g-hash next-hop-based to see if that resolves the
 problem.

 On Sat, Oct 2, 2010 at 2:18 PM, John Neiberger 
 wrote:
 > We converted several connections last week from Etherchannels to
 > routed links with ECMP. We verified that traffic was load-sharing
over
 > those links after making the change. Now, a week later, we are seeing
 > instances where traffic is preferring one or two links out of each
 > "bundle". In some cases all the traffic is flowing over a single link
 > in a four-link setup. This is overloading those connections and we
 > can't figure out why. We are using s-g-hash basic. Should we switch
to
 > s-g-hash next-hop-based?
 >
 > This is causing production issues right now, so I've opened up a TAC
 > case, but I thought I'd ask here, as well, just in case someone had
 > seen this before.
 >
 > Thanks,
 > John
 >
 ___
 cisco-nsp mailing list  cisco-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/cisco-nsp
 archive at http://puck.nether.net/pipermail/cisco-nsp/


>>>
>>>
>>
>> ___
>> cisco-nsp mailing list  cisco-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>> archive at http://puck.nether.net/pipermail/cisco-nsp/
>>
>
> ___
> cisco-nsp mailing list cisco-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] ECMP failing over time?

2010-10-03 Thread Reinhold Fischer
Is "ip multicast multipath" enabled? Take care of the usage guidlines
and limitations before enabling it ...

hth
Reinhold

On Sun, Oct 3, 2010 at 7:10 AM, John Neiberger  wrote:
> This is entirely multicast. We used the s-g-hash to lock each S,G to a
> link, but we didn't think it through. We really should have started
> out using the next-hop-based hash so that the same S,G can be served
> by any link in the group. With s-g-hash, it always gets locked to the
> same bundle.
>
> However, I just thought of another potential culprit. I'm going to
> have to think it through, though.
>
> On Sat, Oct 2, 2010 at 10:17 PM, Keegan Holley
>  wrote:
>> I've seen similar effects.  I'm not sure there's a method to evenly
>> distribute traffic for an indefinite period.  I'm also not sure what you're
>> routing, but the problems I've seen are usually caused by the fact that each
>> flow/hash result differs in size and duration.  Adding extra variables to
>> the equation always helps, but it's almost impossible to keep an even
>> spread.  I suppose your current goal is to simply stop the outages though.
>>
>>
>> On Sat, Oct 2, 2010 at 7:17 PM, John Neiberger  wrote:
>>>
>>> I hate to answer my own question, but I think I figured it out. We're
>>> using s-g-hash basic, which is prone to polarization. I think that's
>>> what we're seeing. Our traffic has become polarized and has developed
>>> an affinity for a subset of links in our "bundles". I'm recommending
>>> that we switch to s-g-hash next-hop-based to see if that resolves the
>>> problem.
>>>
>>> On Sat, Oct 2, 2010 at 2:18 PM, John Neiberger 
>>> wrote:
>>> > We converted several connections last week from Etherchannels to
>>> > routed links with ECMP. We verified that traffic was load-sharing over
>>> > those links after making the change. Now, a week later, we are seeing
>>> > instances where traffic is preferring one or two links out of each
>>> > "bundle". In some cases all the traffic is flowing over a single link
>>> > in a four-link setup. This is overloading those connections and we
>>> > can't figure out why. We are using s-g-hash basic. Should we switch to
>>> > s-g-hash next-hop-based?
>>> >
>>> > This is causing production issues right now, so I've opened up a TAC
>>> > case, but I thought I'd ask here, as well, just in case someone had
>>> > seen this before.
>>> >
>>> > Thanks,
>>> > John
>>> >
>>> ___
>>> cisco-nsp mailing list  cisco-...@puck.nether.net
>>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>>> archive at http://puck.nether.net/pipermail/cisco-nsp/
>>>
>>>
>>
>>
>
> ___
> cisco-nsp mailing list  cisco-...@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/