Re: [c-nsp] Sup2T netflow problems

2014-02-07 Thread Chris Welti

On 06/02/14 10:41, Peter Rathlev wrote:

On Wed, 2014-02-05 at 14:28 +0200, Henri Grönroos wrote:

I think you are encountering CSCui17732 which is present in 15.1.2-SY1
too.


Thank you for the pointer! According to the bug toolkit the 15.0SY
versions are not affected. Can anybody confirm this?


Unfortunately this bug is present in *all* Sup2T firmware releases so far.
I've seen it in 12.2, 15.0 and 15.1 images.


Downgrading to
15.0(1)SY5 would probably be okay for us. The few 15.1SY-specific
features we've started using are not critical (LDP/IGP sync, PW
signalling).

Any really good reason not to downgrade to 15.0SY?


Yep, because that bug is also present in 15.0SY5.
Actually, as far as I know there is *no* officially released firmware with
the bugfix yet.
The good thing is that it usually only happens after a few weeks or months
of uptime, and only if you have NetFlow export running.
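For context, "NetFlow export running" means a Flexible NetFlow monitor with
an exporter attached, roughly along these lines (the names and the collector
address below are just placeholders):

  flow exporter EXPORTER-1
   destination 192.0.2.10
   transport udp 2055
  !
  flow monitor MONITOR-1
   record netflow ipv4 original-input
   exporter EXPORTER-1
  !
  interface Vlan100
   ip flow monitor MONITOR-1 input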
If you desperately need the fix, you should contact your SE to get a special image.

Regards,
Chris

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Some basic vPC questions (nexus 3000)

2014-02-07 Thread Drew Weaver
Greetings,

We are purchasing two Nexus 3000 switches to aggregate some 48-port 1G switches
and plan on using vPC for redundancy.

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps11541/white_paper_c11-685753.html

When I was reading the vPC whitepaper (referenced above) for the Nexus 3000 it 
mentions two different types of vPC link:

#1 The vPC peer keepalive link (whitepaper suggests that this traffic can run 
over the mgmt. interface)
#2 The vPC peer link (needs to be at least two 10G ports in a port channel)
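
For reference, my rough understanding of how those two links get configured,
as an NX-OS sketch (interface numbers and addresses below are made up):

  feature lacp
  feature vpc
  !
  vpc domain 10
    ! keepalive link (#1), here over the mgmt interface
    peer-keepalive destination 192.0.2.2 source 192.0.2.1 vrf management
  !
  ! peer link (#2): two 10G ports bundled into a port-channel
  interface Ethernet1/49-50
    switchport mode trunk
    channel-group 1 mode active
  !
  interface port-channel1
    switchport mode trunk
    vpc peer-link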

My questions are:

1. What happens to the traffic if the vPC peer keepalive communication
fails between the two vPC members?
2. Depending on the answer to the question above, is it possible to make
the vPC peer keepalive link redundant?
3. Can you add more members to the vPC peer link port channel without
disrupting traffic flow?
4. Has anyone run into any unexpected caveats or amusing issues with vPC
that they would like to share?

Thanks and happy Friday!
-Drew

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] understanding BFD echo mode

2014-02-07 Thread Dimitris Befas
Hi Martin,

Exactly. Both modes (control & echo) are used when echo mode is enabled. The
hardware-handled echo packets are not susceptible to remote CPU fluctuations,
because they are turned around by the remote router's actual forwarding path
rather than its CPU. So you get faster and more reliable failure detection
with echo mode.
Furthermore, because echo mode still uses the control (or asynchronous) mode
underneath, you can configure the slow timer to slow down the control BFD
packets and reduce the CPU load that they introduce.
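
As a rough IOS-style sketch of the two knobs involved (the interface, routing
process, and timer values here are arbitrary examples):

  ! control packets fall back to this interval while echo is running
  bfd slow-timers 2000
  !
  interface GigabitEthernet0/1
   bfd interval 50 min_rx 50 multiplier 3
   ! echo mode is on by default on many releases; shown explicitly
   bfd echo
  !
  router ospf 1
   bfd all-interfaces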

http://www.cisco.com/en/US/docs/switches/datacenter/sw/6_x/nx-os/interfaces/configuration/guide/if_bfd.pdf
(page 3)

Dimitris

-Original Message-
From: cisco-nsp [mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of
Martin T
Sent: Thursday, February 6, 2014 8:46 PM
To: cisco-nsp@puck.nether.net
Subject: [c-nsp] understanding BFD echo mode

Hi,

some Cisco routers support BFD in echo mode. Am I correct that BFD echo
packets are sent in addition to BFD control messages once echo mode is
enabled, and that Cisco routers are able to handle the former in hardware
while BFD control messages are punted?



regards,
Martin
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/



Re: [c-nsp] Some basic vPC questions (nexus 3000)

2014-02-07 Thread Tim Stevenson

Hi Drew, please see inline below:

At 07:50 AM 2/7/2014 Friday, Drew Weaver remarked:

Greetings,

We are purchasing two Nexus 3000 switches to aggregate some 48-port
1G switches and plan on using vPC for redundancy.


http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps11541/white_paper_c11-685753.html

When I was reading the vPC whitepaper (referenced above) for the 
Nexus 3000 it mentions two different types of vPC link:


#1 The vPC peer keepalive link (whitepaper suggests that this 
traffic can run over the mgmt. interface)

#2 The vPC peer link (needs to be at least two 10G ports in a port channel)

My questions are

   What happens to the traffic if the vPC peer keepalive
communication fails between the two vPC members?



Nothing. You are of course alerted to that fact, but loss of
bidirectional peer-keepalive (PKA) communication does not impact data-plane
forwarding.



   Depending on the answer to the question above, is it 
possible to make the vPC peer keepalive link redundant?


Yes, you can; it can be a port-channel, for example.
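
For instance, a dedicated port-channel carried in its own VRF; a rough
sketch, with all names and addresses made up:

  vrf context PKA
  !
  interface Ethernet1/47-48
    channel-group 20 mode active
  !
  interface port-channel20
    no switchport
    vrf member PKA
    ip address 10.255.255.1/30
  !
  vpc domain 10
    peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf PKA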




  Can you add more members to the vPC peer link port channel 
without disrupting traffic flow?



There is potential for some disruption when you add/remove links from 
a port-channel, as hash buckets are shuffled around.





   Has anyone run into any unexpected caveats or amusing issues 
with vPC that they would like to share?



I yield the floor... ;)


Hope that helps,
Tim






Thanks and happy Friday!
-Drew











Tim Stevenson, tstev...@cisco.com
Routing & Switching CCIE #5561
Distinguished Technical Marketing Engineer, Cisco Nexus 7000
Cisco - http://www.cisco.com
IP Phone: 408-526-6759

The contents of this message may be *Cisco Confidential*
and are intended for the specified recipients only.

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] vc ipv6 dhcp ssm packet

2014-02-07 Thread Arne Larsen / Region Nordjylland
Hi all.

We are having a problem with Windows DHCPv6 SSM multicast packets on our
network.
We aren't running IPv6 or multicast, so all SSM packets are sent to the
processor on the 6500, which is a Sup720-3B with a PFC3B.
This makes the load on the CPU very high.
We are running s72033-ipservicesk9_wan-mz.122-18.SXF13.bin on one box and
s72033-adventerprisek9_wan-mz.122-33.SXJ2.bin on the other.
Can someone give me a hint on how to get around this?
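
The only workaround I've come up with so far is dropping IPv6 at layer 2
with a MAC ACL plus a VACL, untested and sketched roughly below (the names
and VLAN list are made up, and I'm not certain the ethertype match behaves
the same on every image):

  mac access-list extended MATCH-IPV6
   ! 0x86DD is the IPv6 ethertype
   permit any any 0x86DD 0x0
  !
  vlan access-map DROP-IPV6 10
   match mac address MATCH-IPV6
   action drop
  vlan access-map DROP-IPV6 20
   action forward
  !
  vlan filter DROP-IPV6 vlan-list 100-110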

/Arne


___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/