Thanks for the overwhelming wealth of information. It sounds like the ideal, supported setup would be an MX960 with 3 SCBs, 2 RE-S-2000-4096s, and redundant power supplies. As a recommended option, we would have a third RE-S-2000-4096 as a spare (really, we should have a spare for everything) and install it in the third SCB slot just to hold it: it would not be powered or do anything, just be there for a swap if one of the two installed REs failed, instead of sitting on a shelf somewhere.
Are there any feature limitations I should be aware of using an MX960 with the above configuration and the older (I believe EOL'd) DPC-R-4XGE-XFP line cards? For comparison's sake, I am upgrading from an MX80 using the four 10G ports on the front of the MX80. The RE-S-2000-4096 is more powerful than the RE built into the MX80 (and MX104), and of course the MX960 has more redundancy. I'm just wondering, on the software and feature-set side, whether I am losing anything, assuming both run the same Junos version. Any gotchas to be concerned about using RE-S-2000-4096s, regular SCBs, and DPC-R-4XGE-XFP line cards? We are pushing less than 80 Gbps, so the backplane capacity is of little concern.

On Wed, Jan 13, 2016 at 11:15 AM, Daniel Roesen <d...@cluenet.de> wrote:
> On Wed, Jan 13, 2016 at 09:06:55AM -0800, joel jaeggli wrote:
> > An mx960 has a full fabric with two SCBs.
>
> That depends on the specific MPCs used in the chassis. Some do get
> full performance only with all 3 SCBs.
>
> > it is n+1 redundant with 3. e.g. at that point you can swap one
> > without cutting the fabric bandwidth in half.
>
> Technically, with MPCs (unlike DPCs), when removing the third SCB
> you're removing 1/3rd of the fabric bandwidth as all traffic is
> sprayed across all forwarding planes (two per SCB).
>
> With DPCs, only two SCBs (thus four fabric planes) could be active,
> with the third SCB being hot standby.
>
> Best regards,
> Daniel
>
> --
> CLUE-RIPE -- Jabber: d...@cluenet.de -- dr@IRCnet -- PGP: 0xA85C8AA0
> _______________________________________________
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
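As an aside, the fabric-plane arithmetic in Daniel's reply can be sketched numerically. This is just a minimal illustration using the figures from the thread (two fabric planes per SCB, three SCBs; the function name is my own, not anything from Junos):

```python
PLANES_PER_SCB = 2  # per the thread: two forwarding planes per SCB

def remaining_fabric_fraction(total_scbs: int, card_type: str) -> float:
    """Fraction of fabric bandwidth left after pulling one SCB.

    MPCs spray traffic across all planes, so every SCB carries live
    traffic and removing one costs its share of the planes.  DPCs can
    only use two SCBs (four planes), with the third as hot standby,
    so pulling one SCB costs nothing while the standby takes over.
    """
    if card_type == "MPC":
        total_planes = total_scbs * PLANES_PER_SCB
        remaining_planes = (total_scbs - 1) * PLANES_PER_SCB
        return remaining_planes / total_planes
    elif card_type == "DPC":
        return 1.0
    raise ValueError(f"unknown card type: {card_type}")

# With MPCs, pulling one of three SCBs leaves 2/3 of fabric bandwidth;
# with DPCs, the hot-standby SCB means no loss.
print(remaining_fabric_fraction(3, "MPC"))  # 0.666...
print(remaining_fabric_fraction(3, "DPC"))  # 1.0
```

Which is why, for the DPC-only chassis described above, the third SCB is purely a standby rather than extra capacity.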