With 2 active SCBs (2+0 non-redundant or 2+1 redundant in a 960) you are
capable of 120G per slot.

The worst case throughput in this case would depend on the cards.

For the single-Trio MPC1 it would be 30Gbit per card (actually 31.7 to ~39
depending on packet mix).

For the dual-Trio MPC2 it would be the same per half card, or 60G total.

For the quad-Trio 16XGE it would be 30G per bank of 4 ports until you use an
SCBE or better, which allows 40G per bank of 4.
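
As a rough back-of-the-envelope sketch of those worst-case numbers (the
constants are the figures above; the names and the per-slot cap check are
just my own framing, not anything from Juniper documentation):

# Rough sketch only: worst-case per-slot throughput with 2 active SCB
# fabric planes, assuming ~30 Gbit/s usable per Trio as described above.

SLOT_CAP_GBPS = 120           # 2 active SCBs -> 120G per slot
TRIO_WORST_CASE_GBPS = 30     # conservative per-Trio figure (actual ~31.7-39)

# Card family -> number of Trio PFEs (one Trio serves a bank of
# 4 x 10GE ports on the 16XGE card).
cards = {
    "MPC1 (1x Trio)":  1,
    "MPC2 (2x Trio)":  2,
    "16XGE (4x Trio)": 4,
}

for name, trios in cards.items():
    per_card = trios * TRIO_WORST_CASE_GBPS
    usable = min(per_card, SLOT_CAP_GBPS)
    print(f"{name}: ~{usable} Gbit/s worst case per slot")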


I do not recall the specs for those DPCs, as we downchecked all DPCs for other 
reasons and
held off deployment until the release of the MPC2-3D-Q cards.


On 1/14/2016 1:37 PM, Colton Conor wrote:
> So assuming I went with something like this
> setup: 
> http://www.ebay.com/itm/Juniper-MX960BASE-AC-10x-DPC-R-4XGE-XFP-MS-DPC-1yrWrnty-Free-Ship-40x-10G-MX960-/351615023814?hash=item51dde37ac6:g:WUcAAOSwLVZViHc~
> 
> That has RE-2000s, regular SCBs, and all DPC linecards. Would these all be
> able to run at line rate?
> 
> On Thu, Jan 14, 2016 at 4:19 PM, Christopher E. Brown
> <chris.br...@acsalaska.net> wrote:
> 
> 
>     It is 120Gbit aggregate in an SCB system, as the limit is 120G/slot.
> 
>     This is in the form of 30Gbit per TRIO and 1 TRIO per 4 ports.
> 
>     So, use 3 of 4 ports in each bank if you want 100% line rate.
> 
>     SCBE increases this to 160Gbit or better (depending on age of chassis)
>     allowing for line rate on all 16 ports.
> 
> 
>     Agree, mixing DPC and MPC is a terrible idea.  Don't like DPC to begin
>     with, but nobody in their right mind mixes DPCs and MPCs.
> 
>     On 1/14/16 13:09, Saku Ytti wrote:
>     > On 14 January 2016 at 23:11, Mark Tinka <mark.ti...@seacom.mu> wrote:
>     >
>     > Hey,
>     >
>     >>> A surplus dealer recommended the MPC-3D-16XGE-SFPP-R-B, which is less
>     >>> than $1k/port used.  But it only supports full bandwidth on 12 ports,
>     >>> and would be oversubscribed 4:3 if all 16 are in use.  They are Trio
>     >>> chipset.
>     >
>     >> I wouldn't be too concerned about the throughput on this line card. If
>     >> you hit such an issue, you can easily justify the budget for better
>     >> line cards.
>     >
>     > Pretty sure it's as wire-rate as MPC1/MPC2 on an SCBE system.
>     >
>     > A particularly bad combo would be running with DPCE. Basically any
>     > MPC+DPCE combo is bad, but at least on MPC1/MPC2 systems you can set
>     > the fabric to redundancy mode. But with 16XGE+SCB you probably don't
>     > want to.
>     >
>     > There is some sort of policing/limit on how many fabric grants an MPC
>     > gives up: if your Trio needs 40Gbps of capacity and is operating in
>     > normal mode (not redundancy), then it'll give 13.33Gbps of grants to
>     > each SCB.
>     > However, as a DPCE won't use the third one, the DPCE could only send
>     > at 26.66Gbps towards that 40Gbps Trio, while an MPC would happily send
>     > 40Gbps to an MPC.
>     >

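For what it's worth, the grant split Saku describes works out like this (a
rough sketch only; the 40Gbps figure and the 3-plane vs. 2-plane split are
from his mail, the rest is my own illustration, not vendor documentation):

# Rough illustration of the fabric grant math described above.

trio_capacity_gbps = 40.0   # a Trio wanting 40 Gbit/s from the fabric
planes_mpc_uses = 3         # MPC in normal (non-redundant) mode spreads
                            # its grants across three planes
planes_dpce_uses = 2        # DPCE only drives two of those planes

grants_per_plane = trio_capacity_gbps / planes_mpc_uses   # ~13.33 Gbit/s
dpce_to_trio = grants_per_plane * planes_dpce_uses        # ~26.66 Gbit/s
mpc_to_trio = grants_per_plane * planes_mpc_uses          # 40 Gbit/s

print(f"grants per plane:  {grants_per_plane:.2f} Gbit/s")
print(f"DPCE -> that Trio: {dpce_to_trio:.2f} Gbit/s (2 planes usable)")
print(f"MPC  -> that Trio: {mpc_to_trio:.2f} Gbit/s (all 3 planes usable)")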

-- 
------------------------------------------------------------------------
Christopher E. Brown   <chris.br...@acsalaska.net>   desk (907) 550-8393
                                                     cell (907) 632-8492
IP Engineer - ACS
------------------------------------------------------------------------
_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
