Re: [c-nsp] Nexus 3548 and VDCs

2018-02-26 Thread Tim Stevenson

Hi Mike, please see inline below:

At 06:06 AM 2/25/2018 Sunday, Mike Hammett wrote:
We recently upgraded our Nexus 3548 from version 6.0(2)A1(1d) to 
version 7.0(3)I7(3). We can get in from one of the trunk ports 
feeding the switch, but there is no traffic passing through to other ports.


I'm not fully versed on the process as I wasn't the one doing it, 
just putting in keyboard time looking for resolutions.


1) Cisco doesn't seem to list any documentation for NX-OS 7 on the 
3548, yet 7 is the only available software download on their support 
site. Some of the links take me to the 3600. No idea if that's correct.



The N3548 is integrated into the "Nexus 3000" doc set for 7.x. The 7.x 
train is now a converged software release train with a single "nxos" 
image for all N9K and N3K platforms, including the 3548. The one 
exception is the 3600 series (Broadcom Jericho+ based); the plan is to 
integrate that platform later this year. Definitely don't use the 3600 
docs for the 3548.


The image management and boot model is a bit different in 7.x: there 
is no kickstart image, for example, just a single 'nxos' image. The 
process of converting/upgrading from 6.x to 7.x can be tricky if it is 
not performed according to the documentation here:


https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus3000/sw/upgrade/7_x/b_Cisco_Nexus_3500_Series_NX-OS_Software_Upgrade_and_Downgrade_Guide_Release_7x/b_Cisco_Nexus_3500_Series_NX-OS_Software_Upgrade_and_Downgrade_Guide_Release_7x_chapter_010.html
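
To illustrate the boot-model change (the filenames below are examples 
only, not necessarily the exact images on your switch), 6.x boots from 
a kickstart plus a system image, while 7.x boots from the single nxos 
image:

! NX-OS 6.x boot variables on the 3548 (kickstart + system)
boot kickstart bootflash:n3500-uk9-kickstart.6.0.2.A8.7b.bin
boot system bootflash:n3500-uk9.6.0.2.A8.7b.bin

! NX-OS 7.x boot variable (single converged image)
boot nxos bootflash:nxos.7.0.3.I7.3.bin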

For one thing, you mentioned you went from 6.0(2)A1(1d) to 
7.0(3)I7(3). I'm not sure how you did that, but the doc above does 
state: "An upgrade to Cisco NX-OS Release 7.0(3)I7(2) is supported 
only from Cisco NX-OS Release 6.0(2)A8(7b) or higher releases."
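
A rough sketch of what that documented two-step path looks like in 
practice (the exact intermediate release, filenames, and any extra 
steps such as image compaction must come from the guide above; these 
commands are illustrative only):

! Step 1: upgrade 6.0(2)A1(1d) to a supported intermediate release,
! e.g. 6.0(2)A8(7b)
copy scp://user@server/n3500-uk9-kickstart.6.0.2.A8.7b.bin bootflash:
copy scp://user@server/n3500-uk9.6.0.2.A8.7b.bin bootflash:
install all kickstart bootflash:n3500-uk9-kickstart.6.0.2.A8.7b.bin system bootflash:n3500-uk9.6.0.2.A8.7b.bin

! Step 2: upgrade from the intermediate 6.x release to 7.0(3)I7(3)
! using the single converged nxos image
copy scp://user@server/nxos.7.0.3.I7.3.bin bootflash:
install all nxos bootflash:nxos.7.0.3.I7.3.bin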


In any case, I suggest you engage TAC to help recover this switch if 
you have not done so already.



2) It looks like along the way we got a vdc section added to our 
config, but the only VDC documentation I can find applies to the 7000 
series switches. Is that supposed to be there? Should I just follow 
the 7000's config for VDCs?



This is expected: all Nexus switches in 7.x will have a 'default VDC' 
config section. Only the N7K allows you to configure additional VDCs. 
There shouldn't be much, if any, modification required in that VDC 
config section.
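
For reference, the auto-generated default VDC section usually looks 
something like the following (the exact limit-resource values vary by 
platform and release, so treat this as an illustration rather than 
what your switch should show):

! the name after 'vdc' is the switch's own hostname
vdc SWITCHNAME id 1
  limit-resource vlan minimum 16 maximum 4094
  limit-resource vrf minimum 2 maximum 4096
  limit-resource port-channel minimum 0 maximum 511
  limit-resource u4route-mem minimum 248 maximum 248
  limit-resource u6route-mem minimum 96 maximum 96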


Hope that helps,
Tim

-
Mike Hammett
Intelligent Computing Solutions

Midwest Internet Exchange

The Brothers WISP

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

Tim Stevenson, tstev...@cisco.com
Routing & Switching CCIE #5561
Distinguished Engineer, Technical Marketing
Data Center Switching
Cisco - http://www.cisco.com
+1(408)526-6759

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Nexus 3064 QoS

2018-02-26 Thread BASSAGET Cédric
I had been working on this for 2 days, and found
https://supportforums.cisco.com/t5/lan-switching-and-routing/nexus-3000-qos-traffic-shaping/td-p/2532476
20 minutes after posting this message...

The problem seems to be solved.
Regards
Cédric

2018-02-26 10:14 GMT+01:00 BASSAGET Cédric :

> Hello,
> I'm trying to make QoS work on an NX3064, but I'm not able to make it work
> in the simplest possible lab... Can anybody tell me what's wrong?
>
> I need to prioritize DSCP EF-marked traffic from host B to host A.
>
> Host B (192.168.41.2) <-> NX eth1/48 vlan 41
> Host A (192.168.41.1) <-> NX eth1/24 vlan 41
>
> Host B is 1Gb/s
> Host A is 100Mb/s
>
> show run :
>
> class-map type qos match-all qos-match-prio
>   match dscp 46
> class-map type queuing qos-queue-prio
>   match qos-group 1
>
> policy-map type qos qos-match-trafic
>   class qos-match-prio
> set qos-group 1
>   class class-default
> policy-map type queuing qos-queue-trafic
>   class type queuing qos-queue-prio
> priority
>   class type queuing class-default
> shape kbps 5
>
> interface Ethernet1/24
>   switchport mode trunk
>   switchport trunk allowed vlan 41
>   speed 100
>   duplex full
>   bandwidth 10
>   service-policy type queuing output qos-queue-trafic
> interface Ethernet1/48
>   switchport mode trunk
>   switchport trunk allowed vlan 41
>   service-policy type qos input qos-match-trafic
>
>
> Using iperf from host B to host A (UDP, bandwidth = 110 Mb/s), I get 50
> Mb/s of traffic on host A. That's normal.
>
> Trying to ping from host B to host A (ping -Q 0xB8 192.168.41.1): about
> 50% loss. Here's my problem.
>
> show policy-map on the ingress interface shows that my ICMP traffic is
> classified and assigned to qos-group 1:
>
> N3K-eqx-pa3-1# show policy-map interface ethernet 1/48 type qos
>
> Global statistics status :   enabled
> Global QoS policy statistics status :   enabled
>
> Ethernet1/48
>
>   Service-policy (qos) input:   qos-match-trafic
> SNMP Policy Index:  285213088
>
> Class-map (qos):   qos-match-prio (match-all)
>
>  Slot 1
> 6845 packets
>  Aggregate forwarded :
> 6845 packets
>   Match: dscp 46
>   set qos-group 1
>
> Class-map (qos):   class-default (match-any)
>
>  Slot 1
> 12776799 packets
>  Aggregate forwarded :
> 12776799 packets
>   Match: any
> 12776799 packets
>   set qos-group 0
>
>
> N3K-eqx-pa3-1# show policy-map interface ethernet 1/24 type queuing
>
> Global statistics status :   enabled
> Global QoS policy statistics status :   enabled
>
> Ethernet1/24
>
>   Service-policy (queuing) input:   default-in-policy
> SNMP Policy Index:  301990027
>
> Class-map (queuing):   class-default (match-any)
>   bandwidth percent 100
>   queue dropped pkts : 0
>   queue depth in bytes : 0
>
>   Service-policy (queuing) output:   qos-queue-trafic
> SNMP Policy Index:  301990282
>
> Class-map (queuing):   qos-queue-prio (match-any)
>   priority level 1
>   queue dropped pkts : 0
>   queue depth in bytes : 0
>
> Class-map (queuing):   class-default (match-any)
>   bandwidth percent 100
>   shape kbps 5  min 0
>   queue dropped pkts : 0
>   queue depth in bytes : 0
>
> I don't see any stats here ...
>
> Thanks for your help.
> Regards
>
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

[c-nsp] Nexus 3064 QoS

2018-02-26 Thread BASSAGET Cédric
Hello,
I'm trying to make QoS work on an NX3064, but I'm not able to make it work
in the simplest possible lab... Can anybody tell me what's wrong?

I need to prioritize DSCP EF-marked traffic from host B to host A.

Host B (192.168.41.2) <-> NX eth1/48 vlan 41
Host A (192.168.41.1) <-> NX eth1/24 vlan 41

Host B is 1Gb/s
Host A is 100Mb/s

show run :

class-map type qos match-all qos-match-prio
  match dscp 46
class-map type queuing qos-queue-prio
  match qos-group 1

policy-map type qos qos-match-trafic
  class qos-match-prio
set qos-group 1
  class class-default
policy-map type queuing qos-queue-trafic
  class type queuing qos-queue-prio
priority
  class type queuing class-default
shape kbps 5

interface Ethernet1/24
  switchport mode trunk
  switchport trunk allowed vlan 41
  speed 100
  duplex full
  bandwidth 10
  service-policy type queuing output qos-queue-trafic
interface Ethernet1/48
  switchport mode trunk
  switchport trunk allowed vlan 41
  service-policy type qos input qos-match-trafic


Using iperf from host B to host A (UDP, bandwidth = 110 Mb/s), I get 50 Mb/s
of traffic on host A. That's normal.

Trying to ping from host B to host A (ping -Q 0xB8 192.168.41.1): about
50% loss. Here's my problem.

show policy-map on the ingress interface shows that my ICMP traffic is
classified and assigned to qos-group 1:

N3K-eqx-pa3-1# show policy-map interface ethernet 1/48 type qos

Global statistics status :   enabled
Global QoS policy statistics status :   enabled

Ethernet1/48

  Service-policy (qos) input:   qos-match-trafic
SNMP Policy Index:  285213088

Class-map (qos):   qos-match-prio (match-all)

 Slot 1
6845 packets
 Aggregate forwarded :
6845 packets
  Match: dscp 46
  set qos-group 1

Class-map (qos):   class-default (match-any)

 Slot 1
12776799 packets
 Aggregate forwarded :
12776799 packets
  Match: any
12776799 packets
  set qos-group 0


N3K-eqx-pa3-1# show policy-map interface ethernet 1/24 type queuing

Global statistics status :   enabled
Global QoS policy statistics status :   enabled

Ethernet1/24

  Service-policy (queuing) input:   default-in-policy
SNMP Policy Index:  301990027

Class-map (queuing):   class-default (match-any)
  bandwidth percent 100
  queue dropped pkts : 0
  queue depth in bytes : 0

  Service-policy (queuing) output:   qos-queue-trafic
SNMP Policy Index:  301990282

Class-map (queuing):   qos-queue-prio (match-any)
  priority level 1
  queue dropped pkts : 0
  queue depth in bytes : 0

Class-map (queuing):   class-default (match-any)
  bandwidth percent 100
  shape kbps 5  min 0
  queue dropped pkts : 0
  queue depth in bytes : 0

I don't see any stats here ...
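
Perhaps the per-queue hardware drop counters would show more. On the
Nexus 3000 (as far as I know; the output format varies by release, so
this is only a pointer) they can be checked with:

N3K-eqx-pa3-1# show queuing interface ethernet 1/24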

Thanks for your help.
Regards
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IPv6 uRPF broken on NCS5500 XR 6.2.3?

2018-02-26 Thread Mark Tinka
One of the reasons I'm not very keen on using merchant silicon for
high-touch routing.

Mark.

On 24/Feb/18 10:19, Chris Welti wrote:
> Hi David,
>
> uRPF on the NCS5500 is a mess due to limitations of the Jericho
> chipset. It has to do with the TCAM optimizations and the doubling of
> route lookups needed for uRPF (source and destination).
>
> From what I understand:
>
> On -SE models, for uRPF to work you need to disable double-capacity mode
> (you will lose space for half of the routes!):
>
> hw-module tcam fib ipv4 scaledisable
>
> depending on the software you are running, you might also need to
> reserve IPv6 space in the eTCAM:
>
> hw-module profile tcam fib ipv4 unicast percent 50
> hw-module profile tcam fib ipv6 unicast percent 50
>
> For non-SE models you need to disable all the internal TCAM (iTCAM)
> optimizations:
>
> hw-module fib ipv4 scale host-optimized-disable
> hw-module fib ipv6 scale internet-optimized-disable
>
> Unfortunately, with those optimizations disabled, the current full table
> no longer fits on non-SE models.
>
> IMHO it's best not to use uRPF at all on this platform.
>
> See also bugID CSCvf44418, and the excellent Cisco Live presentation
> "NCS5500: Deepdive in the Merchant Silicon High-end SP Routers -
> BRKSPG-2900" from Nicolas Fevrier. Make sure you get the latest one
> from Barcelona 2018, which includes details about uRPF.
>
> Regards,
> Chris
>
> Am 23.02.18 um 22:58 schrieb David Hubbard:
>> Hi all, curious if anyone has run into issues with IPv6 uRPF on
>> NCS5500 and/or XR 6.2.3?  I have an interface where I added:
>>
>> ipv4 verify unicast source reachable-via any
>> ipv6 verify unicast source reachable-via any
>>
>> and immediately lost my ability to talk to a BGP peer connected to it
>> using a local /126 range; no ping, TCP, etc. There’s obviously a
>> route in the FIB given it’s connected and up, but I did check. The same
>> issue does not occur with the remote IPv4 peering address on a /30
>> net, suggesting uRPF for IPv4 doesn’t have the same bug.
>>
>> Thanks
>>
>>
>> ___
>> cisco-nsp mailing list  cisco-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>> archive at http://puck.nether.net/pipermail/cisco-nsp/
>
> ___
> cisco-nsp mailing list  cisco-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/