Re: [c-nsp] 6500/7600 TCAM Usage

2016-06-02 Thread Patrick M. Hausen
Good morning,

Interesting read. Running SUP-720-based gear, we are of course
fully aware of the issue. When the DFZ was about to hit 500k IPv4
prefixes, we limited the AS path length, and we currently receive
default routes from our peers.

Now that we are planning to replace our supervisor engines
(3BXL) with VSS-capable ones (10G-3CXL), I'm considering
repartitioning the TCAM for 768k IPv4 and 128k IPv6 entries
and going back to full tables.
While monitoring the usage closely, of course. ;-)
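
For reference, on the 3CXL I assume the repartitioning would look roughly
like this (values are in thousands of entries, IPv6 entries consume two
TCAM slots each, and as far as I know the change only takes effect after
a reload):

  mls cef maximum-routes ip 768
  mls cef maximum-routes ipv6 128

The numbers are only meant to illustrate the split, not a tested config.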

I'm not asking for a time estimate when we will hit that
limit. DFZ is at slightly over 600k v4 and about 30k v6,
currently. And predictions are difficult, especially about
the future.

What puzzles me is: how do vendors deal with this in
the long run? I have been using my search engine of
least distrust, to no avail. Which platforms offer vastly
bigger TCAMs, at least twofold, better yet an order
of magnitude?

With RIRs handing out ever smaller prefixes I expect
the IPv4 address space fragmentation to accelerate.

Or can one get around those rather arbitrary hard limits
completely? Is it possible, e.g., to have a TCAM with timestamps
associated with entries, so one could use the TCAM as
a route cache in LRU fashion and process-switch everything
new/unknown?

As I said I was searching for some general information
on the topic but all I found were blog entries on the
precise problem we face with the 6500 platform.
What's next?

I did not yet take the time to browse individual datasheets
of gear that is supposedly "bigger" than a 65k.

Some pointers would be most welcome.

Thanks,
Patrick
-- 
punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
i...@punkt.de   http://www.punkt.de
Gf: Jürgen Egeling  AG Mannheim 108285


Re: [c-nsp] ip virtual-reassembly drop-fragments

2016-06-02 Thread Juergen Marenda
OK, I found a document stating that "ip virtual..." is good for DDoS
prevention,
http://blog.ine.com/2008/11/05/dealing-with-fragmented-traffic/
and does not help with reassembling in a memory-efficient way,
which is what I had learned from reading the Cisco documentation when I
first saw that command appearing in my routers' configs.
Maybe this is an evolution of the functionality.

Nevertheless,
having it active on routing-only routers (without "drop-fragments")
does have a (massive) negative impact on the traffic between,
for example, the firewalls behind those routers
using IPsec tunnels (sending IP/ESP packets, often fragmented);
PMTUD does not help, since that is not IP/TCP traffic.

Seeing this (also in setups with no connection to the Internet, so "DDoS" is
not a factor)
brought me to the recommendation to disable that feature.
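
On the boxes in question that simply means something like the following on
every interface where the command shows up (the interface name here is just
an example):

  interface GigabitEthernet0/0
   no ip virtual-reassembly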

Sorry for any confusion I may have created,

Juergen.



Re: [c-nsp] ip virtual-reassembly drop-fragments

2016-06-02 Thread Juergen Marenda
Reassembling IPv4 packets inside the router is only needed for fragmented
packets with the router itself as the destination; in the IPv4 world, the
target host is responsible for reassembling fragmented packets,
even when the fragmentation happens on a router in between and not on the
source host.

(For example, if an IPsec-encapsulated packet gets too big with the
additional headers, the destination router which will de-IPsec it must first
reassemble it (globally settable IPsec behaviour).
On GRE tunnels, the IP fragments will be delivered to the destination host,
which must reassemble them.)

To help against a fragment DDoS, configuring a mechanism that is not
involved will not help,
so you may want to use ACLs or the IOS firewall instead. See for example

http://www.cisco.com/c/en/us/support/docs/ip/generic-routing-encapsulation-gre/8014-acl-wp.html

(not specific to GRE, even though the name suggests it).
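
If you really only want to drop fragments on the outside interface, a plain
extended ACL with the "fragments" keyword is a sketch of that (interface and
ACL number are just examples; note the keyword matches non-initial fragments):

  access-list 110 deny ip any any fragments
  access-list 110 permit ip any any
  !
  interface GigabitEthernet0/0
   ip access-group 110 in

But keep in mind the warnings elsewhere in this thread that this will also
drop legitimate fragmented traffic.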

Juergen.

-----Original Message-----
From: Satish Patel [mailto:satish@gmail.com]
Sent: Friday, 3 June 2016 03:01
To: c...@marenda.net
Cc: Nick Hilliard; Cisco Network Service Providers
Subject: Re: AW: [c-nsp] ip virtual-reassembly drop-fragments

Sorry, typo: it was "Internet".

We are getting many IP fragment DDoS attacks, so I was planning to use it on
the outside interface to drop all IP fragmented packets.

--
Sent from my iPhone

> On Jun 2, 2016, at 10:44 AM, Juergen Marenda  wrote:
> 
> 
> Satish Patel wrote:
>> is it safe to put on internap facing interface?
>> 
>> ip virtual-reassembly drop-fragments
> 
> what's an "internap"?
> 
> s/ap/et/
> 
> Yes it is safe, but
> 
> "no ip virtual-reassembly"
> is the best thing you can do, on every interface, and look from time
> to time and after reloads whether it reappears.
> 
> "virtual-reassembly" should "reassemble" fragments (in a special,
> memory-conserving way), so dropping fragments in that context must be
> an April Fools' joke.
> 
> Having too few resources,
> the theoretically good idea behind "virtual-reassembly" does not work
> very well in practice, especially when it should be useful.
> 
> Using the "no" form on every interface where it appears automagically
> when you configure NAT, crypto, ... did help us to solve many problems.
> 
> Juergen.
> 
> 



Re: [c-nsp] ASR920 - Any "outstanding" TAC cases people are working through?

2016-06-02 Thread Mark Tinka


On 2/Jun/16 21:41, Erik Sundberg wrote:

> We have been using ASR920's for a couple months now.
>
> I have an outstanding Memory leak issue in 
> asr920-universalk9_npe.03.16.01a.S.155-3.S1a-ext.bin
>
> https://bst.cloudapps.cisco.com/bugsearch/bug/CSCuy87268
>
> The workaround doesn't work for me; I just tried it. TAC gave me the
> following. We have to reboot the ASR920 to bring the memory back down.
>
> The fix for CSCuy87268 has been committed and will be available in the
> following releases:
>
> XE 3.16.4S/15.5(3)S4 - planned for September 2016.
> XE 3.18.S1/15.6(2)S1 - planned for mid June 2016.

Have you considered 3.16(2a)S in the meantime? We're running fine on it and
haven't had any memory leak issues.

Mark.


Re: [c-nsp] ip virtual-reassembly drop-fragments

2016-06-02 Thread Randy via cisco-nsp
How did you determine that this was a fragment DDoS, as opposed to
valid fragmentation coupled with packets arriving out of order?



Be careful when you deny fragments, because you could very well deny
legitimate traffic, unless of course your $Employer has a policy *not* to
accept fragmented packets regardless of validity.

./Randy

- Original Message -
From: Satish Patel 
To: "" 
Cc: Cisco Network Service Providers 
Sent: Thursday, June 2, 2016 6:00 PM
Subject: Re: [c-nsp] ip virtual-reassembly drop-fragments

Sorry, typo: it was "Internet".

We are getting many IP fragment DDoS attacks, so I was planning to use it on
the outside interface to drop all IP fragmented packets.

--
Sent from my iPhone

> On Jun 2, 2016, at 10:44 AM, Juergen Marenda  wrote:
> 
> 
> Satish Patel wrote:
>> is it safe to put on internap facing interface?
>> 
>> ip virtual-reassembly drop-fragments
> 
> what's an "internap"?
> 
> s/ap/et/
> 
> Yes it is safe, but
> 
> "no ip virtual-reassembly"
> is the best thing you can do, on every interface, 
> and look from time to time and after reloads whether it reappears.
> 
> "virtual-reassembly" should "reassemble" fragments (in a special, memory-
> conserving way),
> so dropping fragments in that context must be an April Fools' joke.
> 
> Having too few resources,
> the theoretically good idea behind "virtual-reassembly" does not work very
> well in practice,
> especially when it should be useful.
> 
> Using the "no" form on every interface where it appears automagically
> when you configure NAT, crypto, ... did help us to solve many problems.
> 
> Juergen.
> 
> 

Re: [c-nsp] ip virtual-reassembly drop-fragments

2016-06-02 Thread Satish Patel
Sorry, typo: it was "Internet".

We are getting many IP fragment DDoS attacks, so I was planning to use it on
the outside interface to drop all IP fragmented packets.

--
Sent from my iPhone

> On Jun 2, 2016, at 10:44 AM, Juergen Marenda  wrote:
> 
> 
> Satish Patel wrote:
>> is it safe to put on internap facing interface?
>> 
>> ip virtual-reassembly drop-fragments
> 
> what's an "internap"?
> 
> s/ap/et/
> 
> Yes it is safe, but
> 
> "no ip virtual-reassembly"
> is the best thing you can do, on every interface, 
> and look from time to time and after reloads whether it reappears.
> 
> "virtual-reassembly" should "reassemble" fragments (in a special, memory-
> conserving way),
> so dropping fragments in that context must be an April Fools' joke.
> 
> Having too few resources,
> the theoretically good idea behind "virtual-reassembly" does not work very
> well in practice,
> especially when it should be useful.
> 
> Using the "no" form on every interface where it appears automagically
> when you configure NAT, crypto, ... did help us to solve many problems.
> 
> Juergen.
> 
> 


Re: [c-nsp] ASR920 - Any "outstanding" TAC cases people are working through?

2016-06-02 Thread Erik Sundberg
We have been using ASR920's for a couple months now.

I have an outstanding Memory leak issue in 
asr920-universalk9_npe.03.16.01a.S.155-3.S1a-ext.bin

https://bst.cloudapps.cisco.com/bugsearch/bug/CSCuy87268

The workaround doesn't work for me; I just tried it. TAC gave me the
following. We have to reboot the ASR920 to bring the memory back down.

The fix for CSCuy87268 has been committed and will be available in the
following releases:

XE 3.16.4S/15.5(3)S4 - planned for September 2016.
XE 3.18.S1/15.6(2)S1 - planned for mid June 2016.



Issue: RP0 memory usage is at 98% and growing; status is in the warning state.
I had to reboot the device to get the memory back down; however, it still grows.


Model: ASR-920-24SZ-M

ASR920 - #1
98% used memory and uptime is 24 weeks.
ASR920#sh platform software status control-processor bri
Load Average
 Slot  Status  1-Min  5-Min 15-Min
  RP0 Healthy   0.04   0.07   0.04

Memory (kB)
 Slot  StatusTotal Used (Pct) Free (Pct) Committed (Pct)
  RP0 Warning  3438048  3361672 (98%)76376 ( 2%)   3345388 (97%)

CPU Utilization
 Slot  CPU   User System   Nice   IdleIRQ   SIRQ IOwait
  RP00  11.93   7.22   0.00  80.64   0.00   0.20   0.00
 1   9.60   8.30   0.00  81.78   0.00   0.30   0.00


ASR920 - #2 Before Reboot (Uptime around 10 Weeks)

ASR920#sh platform software status control-processor bri
Load Average
  Slot  Status  1-Min  5-Min 15-Min
   RP0 Healthy   0.00   0.00   0.00

Memory (kB)
  Slot  StatusTotal Used (Pct) Free (Pct) Committed (Pct)
  RP0 Warning  3438048  3360436 (98%)77612 ( 2%)   3341328 (97%)

CPU Utilization
  Slot  CPU   User System   Nice   IdleIRQ   SIRQ IOwait
   RP00   6.60   3.70   0.00  89.38   0.00   0.30   0.00
1   1.40   0.80   0.00  97.80   0.00   0.00   0.00


ASR920 - #2 Post Reboot

EAR1.ATL1#sh platform software status control-processor bri
Load Average
 Slot  Status  1-Min  5-Min 15-Min
  RP0 Healthy   0.00   0.02   0.00

Memory (kB)
 Slot  StatusTotal Used (Pct) Free (Pct) Committed (Pct)
  RP0 Healthy  3438048  1855832 (54%)  1582216 (46%)   1512524 (44%)

CPU Utilization
 Slot  CPU   User System   Nice   IdleIRQ   SIRQ IOwait
  RP00  10.68  10.88   0.00  78.02   0.00   0.39   0.00
 1   6.29   6.49   0.00  86.91   0.00   0.29   0.00




The memory leaks are in the processes SPA_XCVR_OIR and enqueue_oir_msg, from
what TAC has told me.

EAR1.ATL1#  show platform software memory iomd 0/0 brief
  module              allocated  requested  allocs     frees
  ------------------------------------------------------------
  DEVOBJ  38496 37792 880
  IOMd intr   1392  1104  360
  Summary 1198313647679413327 496656324 431793784
  appsess_ctx 1736  1728  1 0
  appsess_timer   56403 1
  bsess_hdl   40321 0
  cdh-shim7260  5940  165   0
  cdllib  1668  1660  7 6
  chunk   5573  5517  7 0
  enqueue_oir_msg 506580480 253290240 49236333  17575053
  env_wheel   20108 20100 1 0
  eventutil   310794307834386   16
  fpd_sb  208   200   1 0
  fpd_upg 5636  5548  110
  geim_esb5040  4816  280
  geim_hwidb  16352 16128 280
  geim_instance   16800 16576 280
  geim_spa_instance   13440 13216 280
  geim_spa_plugin 764   756   2 1
  ipc_shim_pak0 0 17057467  17057467
  null_spa_plugin 104   961 0
  oir_create  80721 0
  oir_enqueue_event   0 0 1 1
  oir_processing  24161 0
  queue   240   200   5 0
  spa_bay_array   128   964 0
  spa_bay_create  120   112   1 0
  spa_env_enq 0 0 7 7
  spa_env_subsys_init 512   384   160
  spa_oir_psm 264   240   2825
  spa_plugin  672   480   240
  spa_tdl_alloc   0 0 343085676 343085676
  spa_xcvr_oir691264468 425661956 50775367  17575053
  

Re: [c-nsp] 6500/7600 TCAM Usage

2016-06-02 Thread Saku Ytti
On 2 June 2016 at 18:32, Chris Welti  wrote:
>> You can configure 'freeze', 'reset', or 'recover' for exception
>> action. TAC didn't know what 'recover' does and that option is not
>> available on newer kit.
>
>
> Where can you set that option for FIB exceptions?

'mls cef error action reset' (or freeze, or recover which probably
does not do anything).
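
If I remember correctly, you can also check whether a box has already
tripped with

  show mls cef exception status

which should report the current IPv4/IPv6 (and MPLS) FIB exception state.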

-- 
  ++ytti


Re: [c-nsp] 6509 weird pps value

2016-06-02 Thread Saku Ytti
On 2 June 2016 at 17:43, Ben Hammadi, Kayssar (Nokia - TN/Tunis)
 wrote:

Hey,

> On the 6708 linecard the fabric ASIC has 20G and combines the ports into two
> groups: 1,3,5,7 and 2,4,6,8. Traffic between ports within one of these groups
> does not go over the fabric and is not counted against that BW. What about
> traffic between groups, for example from port 1 to port 2 on the same
> linecard; does that go over the fabric?

Exactly. Inside a fabric channel they use local switching and never
leave the linecard. Between fabric channels (same card or not) they use
the fabric.

-- 
  ++ytti


Re: [c-nsp] 6500/7600 TCAM Usage

2016-06-02 Thread Chris Welti

On 01/06/16 18:28, Saku Ytti wrote:

On 1 June 2016 at 12:40, Phil Mayers  wrote:

That was always the documented behaviour on sup720. I never got an
explanation when I asked as to why it was irreversible.


You can configure 'freeze', 'reset', or 'recover' for exception
action. TAC didn't know what 'recover' does and that option is not
available on newer kit.


Where can you set that option for FIB exceptions?

Regards,
Chris


Re: [c-nsp] 6500/7600 TCAM Usage

2016-06-02 Thread Chris Welti

On 01/06/16 16:28, Marian Ďurkovič wrote:

On Wed, Jun 01, 2016 at 11:03:05AM +0200, Chris Welti wrote:

On 01/06/16 10:24, Mikael Abrahamsson wrote:

On Tue, 31 May 2016, Pete Templin wrote:


+1 on what Gert said. You'll get log entries at the 90% threshold within
a region, but the badness only happens when you tickle the 100%
threshold.


In my 5 year old experience, the badness would continue even if you
removed some routes and TCAM usage dropped to (let's say) 95% again. The
problem would only be solved by reboot. Is this still the case?



Yes, even on the latest releases. Once you reach "FIB exception state = TRUE",
no new prefix installations are possible in the TCAM until a reboot.
That also applies to the Sup2T, btw.


Hmm, in quite old SXF16 we've received the following message a few times:

Jan 12 16:44:53.087: %CONST_V6-SP-5-FIB_EXCEP_OFF: Protocol IPv6 recovered from 
FIB exception

Is this no longer working?


   Thanks,

   M.



Interesting. I guess that would call for a new evaluation in the lab...
I guess that is Sup720?

Chris


Re: [c-nsp] ISR4431 memory usage

2016-06-02 Thread Juergen Marenda
We have several ISR4431s, each with a minimum of two full tables (but no
default), without problems, migrated from 7201 and [23]8xx routers

(but the memory-eater "soft-reconfiguration" is no longer in use).
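
(If anyone still has it configured, removing it is just the "no" form per
neighbour; route refresh makes it unnecessary for inbound policy changes
anyway. ASN and neighbour address below are made up:

  router bgp 64500
   no neighbor 192.0.2.1 soft-reconfiguration inbound
)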

Juergen.



Re: [c-nsp] ip virtual-reassembly drop-fragments

2016-06-02 Thread Juergen Marenda

Satish Patel wrote:
> is it safe to put on internap facing interface?
> 
> ip virtual-reassembly drop-fragments

what's an "internap"?

s/ap/et/

Yes it is safe, but

"no ip virtual-reassembly"
is the best thing you can do, on every interface, 
and look from time to time and after reloads whether it reappears.

"virtual-reassembly" should "reassemble" fragments (in a special, memory-
conserving way),
so dropping fragments in that context must be an April Fools' joke.

Having too few resources,
the theoretically good idea behind "virtual-reassembly" does not work very
well in practice,
especially when it should be useful.

Using the "no" form on every interface where it appears automagically
when you configure NAT, crypto, ... did help us to solve many problems.
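
A quick way to check after a reload whether it has crept back in is something
like

  show running-config | include virtual-reassembly

on the box in question.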

Juergen.
 



Re: [c-nsp] 6509 weird pps value

2016-06-02 Thread Ben Hammadi, Kayssar (Nokia - TN/Tunis)
Hi Saku, 

   On the 6708 linecard the fabric ASIC has 20G and combines the ports into two
groups: 1,3,5,7 and 2,4,6,8. Traffic between ports within one of these groups
does not go over the fabric and is not counted against that BW. What about
traffic between groups, for example from port 1 to port 2 on the same linecard;
does that go over the fabric?

Br.

  KAYSSAR BEN HAMMADI
  IP Technical Manager
  CCIE (#48406), JNCIE-M (#471), JNCIE-SP (#1147)
  Mobile :  +216 29 349 952  /  +216 98 349 952
   
  



-Original Message-
From: Saku Ytti [mailto:s...@ytti.fi] 
Sent: Thursday, June 02, 2016 9:46 AM
To: Ben Hammadi, Kayssar (Nokia - TN/Tunis) 
Cc: cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] 6509 weird pps value

On 2 June 2016 at 11:41, Ben Hammadi, Kayssar (Nokia - TN/Tunis)
 wrote:
> The amount of traffic that crosses the fabric can be seen via "show
> fabric utilization"; what about the locally switched traffic inside the same
> channel, how can we check that amount?

To my knowledge there is no command.

-- 
  ++ytti

Re: [c-nsp] ASR9k Bundle QoS in 6.0.1

2016-06-02 Thread Saku Ytti
On 2 June 2016 at 15:55, Adam Vitkovsky  wrote:

Hey,

> Arguably there are some benefits in using central arbiter but I'm not really
> convinced.

I don't believe this is how the ASR9k works. Considering I have no idea how
it works, that's a pretty bold statement. I'm fit for management.

> In case of a central arbiter there are out of band links to it. So the
> requests are sent only to two arbiters (primary/backup).

But for what purpose? What can we possibly gain? Why does fabric1 need
to know I'm going to use capacity on fabric2? Signalling this
information has a non-trivial cost.

-- 
  ++ytti


Re: [c-nsp] ip virtual-reassembly drop-fragments

2016-06-02 Thread Nick Hilliard
Satish Patel wrote:
> is it safe to put on internap facing interface?
> 
> ip virtual-reassembly drop-fragments

what's an "internap"?

Nick


Re: [c-nsp] ASR9k Bundle QoS in 6.0.1

2016-06-02 Thread Adam Vitkovsky
> Saku Ytti [mailto:s...@ytti.fi]
> Sent: Wednesday, June 01, 2016 12:50 PM
>
> On 1 June 2016 at 12:45, Adam Vitkovsky 
> wrote:
> > Got a confirmation from Xander, so in summary:
> > Each RP has 2 fabric chips, but both controlled by one common RSP Arbiter.
> > There are two RP cards in the chassis - so two Arbiters.
> > Only the Arbiter on the active RP controls all 4 fabrics.
> > However both Arbiters receive the fabric access requests in order
> > to know exactly the whole state of the system at any given time, so the
> > failover can be instantaneous.
> > There is no keepalive between the Arbiters, but the RPs have a "CPLD"
> > ASIC, and one of its functions is to track the other RP's state via
> > low-level keepalives.
>
> What about, say, a 9922 with a lot of fabrics? It seems completely odd to me
> why you'd even want to design a centralised arbiter instead of a per-fabric
> arbiter. Each linecard connects to all fabrics and balances between fabrics.
> I don't see why fabric1 would care about fabric2's situation.
>
It's not like fabric1 cares about fabric2.
It's merely about whether the FIAs exchange fabric access requests and grants
directly between each other in-band (over the crossbar links), in which case
each egress FIA would be responsible for arbitrating access to its own
resources (Juniper MX),
or whether the FIAs relay these messages via out-of-band links through a
primary and a backup central arbiter.

Arguably there are some benefits to using a central arbiter, but I'm not
really convinced.

The 9922 and 9912 are different beasts, as they use a 3-stage midplane design
much like the CRS,
and my understanding is that they don't use the central-arbiter approach.
I assume they use the same or similar concepts as J-MX or C-CRS, but I'm not sure.
So for what it's worth they may not even use VOQs and arbitration, and might as
well use egress speedup, output buffers and explicit backpressure like the CRS.
I don't know.

> What you're saying does not really make sense to me; it implies that a fabric
> request to fabric1 is sprayed to all fabrics in the system. Why? What do we
> gain? We're multiplying the request volume by the number of fabrics we
> have, limiting how many fabrics we can insert.
>
In the case of a central arbiter there are out-of-band links to it, so the
requests are sent only to the two arbiters (primary/backup).


adam











Adam Vitkovsky
IP Engineer

T:  0333 006 5936
E:  adam.vitkov...@gamma.co.uk
W:  www.gamma.co.uk


Re: [c-nsp] 6509 weird pps value

2016-06-02 Thread Saku Ytti
On 2 June 2016 at 11:41, Ben Hammadi, Kayssar (Nokia - TN/Tunis)
 wrote:
> The amount of traffic that crosses the fabric can be seen via "show
> fabric utilization"; what about the locally switched traffic inside the same
> channel, how can we check that amount?

To my knowledge there is no command.

-- 
  ++ytti


Re: [c-nsp] 6509 weird pps value

2016-06-02 Thread Ben Hammadi, Kayssar (Nokia - TN/Tunis)
Thanks , 

  The amount of traffic that crosses the fabric can be seen via "show fabric
utilization"; what about the locally switched traffic inside the same channel,
how can we check that amount?

Br.

  KAYSSAR BEN HAMMADI
  IP Technical Manager
  CCIE (#48406), JNCIE-M (#471), JNCIE-SP (#1147)
  Mobile :  +216 29 349 952  /  +216 98 349 952


Re: [c-nsp] 6509 weird pps value

2016-06-02 Thread Saku Ytti
On 2 June 2016 at 11:24, Ben Hammadi, Kayssar (Nokia - TN/Tunis)
 wrote:

Hey,

> In the case of a DFC-equipped module where the traffic is between two ports
> in the same pair (on the same FPGA), is this traffic seen in "show fabric
> utilization"? If not, do you have an idea how we can check the load on the
> FPGA?

If traffic stays inside the same fabric channel (you can view which ports
share a fabric channel), then it is locally switched, and the packets
won't be seen in the fabric at all.

When traffic is not inside the same fabric channel, as long as you have
fabric cards, the traffic is seen in the fabric. Full packets only ever travel
the DBUS if you have non-fabric cards. Anyone sensible runs the chassis with
fabric cards only.

DFC and CFC are both fabric cards; the only difference is the lookup, i.e.
whether the lookup engine is locally on the linecard or in the SUP.
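
If in doubt, "show module" should list the DFC daughterboards (if any)
underneath each linecard, so you can see which flavour you have.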
-- 
  ++ytti


Re: [c-nsp] 6509 weird pps value

2016-06-02 Thread Ben Hammadi, Kayssar (Nokia - TN/Tunis)
Thanks Saku, 

In the case of a DFC-equipped module where the traffic is between two ports in
the same pair (on the same FPGA), is this traffic seen in "show fabric
utilization"? If not, do you have an idea how we can check the load on the FPGA?

Br.

  KAYSSAR BEN HAMMADI
  IP Technical Manager
  CCIE (#48406), JNCIE-M (#471), JNCIE-SP (#1147)
  Mobile :  +216 29 349 952  /  +216 98 349 952

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/