Re: [j-nsp] MX960 with 3 RE's?

2016-01-15 Thread Saku Ytti
On 15 January 2016 at 03:13, Christopher E. Brown
 wrote:
> When the same folks were asked about the 16XGE card and the 120G (and later 
> 160G)
> performance it was indicated that there was an additional layer of 
> logic/asics used to tie
> all 4 trios in the 16XGE to the bus and that these ASICs offloaded some of 
> the bus related
> overhead handling from the TRIOs, freeing up enough capacity to allow each 
> TRIO in the
> 16XGE to provide a full 40G duplex after jcell/etc overhead.


Sorry Christopher for being suspicious, but I think you must have made
some mistake in your testing.

The only difference that I can think of, on top of the multicast
replication, is that the 16XGE does not have TCAM. But that does not
matter, as the TCAM isn't used for anything in MPC1/MPC2, it's just
sitting there.
MPC1/MPC2 can be bought without QX. You specifically mention '3D-Q'.
If you were testing with QX enabled, then it's a wholly different thing.
QX was never dimensioned to push all traffic on every port via QX;
it's very much underdimensioned for this. If MQ can do maybe
~70Gbps of memory BW, QX can't do anywhere near 40Gbps. So if you enable
QX for ingress+egress, you're gonna have very limited performance.
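
As an aside, an easy way to confirm whether a given slot actually carries a
QX-equipped '-Q' card is the hardware inventory; a minimal check, assuming the
usual inventory descriptions (exact model strings vary by card and release):

  > show chassis hardware models
  > show chassis hardware | match MPC

The '-Q'/'-EQ' variants show up in the model/description strings, so you can
tell at a glance whether the cards under test even have a QX to enable.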

I'm gonna need much more concrete data to believe the 16XGE has higher
memory bandwidth or lookup performance than the MPC2. If this information
is available to you, or if you can easily find it, I'd really
appreciate it. This would be new and very interesting information to me;
it's just that right now, all the data I have disagrees with that.

-- 
  ++ytti
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-15 Thread Adam Vitkovsky
> From: Saku Ytti [mailto:s...@ytti.fi]
> Sent: Friday, January 15, 2016 10:18 AM
> On 15 January 2016 at 03:13, Christopher E. Brown
>  wrote:
> > When the same folks were asked about the 16XGE card and the 120G (and
> > later 160G) performance it was indicated that there was an additional
> > layer of logic/asics used to tie all 4 trios in the 16XGE to the bus
> > and that these ASICs offloaded some of the bus related overhead
> > handling from the TRIOs, freeing up enough capacity to allow each TRIO in
> the 16XGE to provide a full 40G duplex after jcell/etc overhead.
>
>
> Sorry Christopher for being suspicious, but I think you must have made some
> mistake in your testing.
>
> Only difference that I can think of, on top of the multicast replication, is 
> that
> 16XGE does not have TCAM. But that does not matter, as the TCAM isn't
> used for anything in MPC1/MPC2, it's just sitting there.
> MPC1/MPC2 can be bought without QX. You specifically mention '3D-Q'.
> If you were testing with QX enabled, then it's a wholly different thing.
> QX was never dimensioned to push all traffic in every port via QX, it's very
> very much underdimensioned for this. If MQ can do maybe ~70Gbps
> memory BW, QX can't do anywhere near 40Gbps. So if you enable QX or
> ingress+egress, you're gonna have very very limited performance.
>
Saku is right; if you do the math, 40.960Gbps is the theoretical maximum for
QX as a whole.

adam



Adam Vitkovsky
IP Engineer

T:  0333 006 5936
E:  adam.vitkov...@gamma.co.uk
W:  www.gamma.co.uk



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX960 with 3 RE's?

2016-01-15 Thread Kevin Wormington
So what, if any, are the bandwidth limitations of an all-DPCE-R system with the original SCB?

Thanks,

Kevin


> On Jan 15, 2016, at 6:59 AM, Christopher E. Brown  
> wrote:
> 
> 
> The MPC2-Q is an advanced per unit queueing card and has QX, it also
> runs against the lower/original fabric rate.
> 
> The 16XGE rate is a port mode card with no QX, and it supports the first
> of the fabric speed increases.
> 
> The published capacity of the MPC2-Q is 30G per MIC with a supplied by
> Juniper actual worst-case figure of 31.7G, climbing to about 39G with
> larger frames.
> 
> This matches with my own test results and it does not change with SCB model.
> 
> 
> On 1/15/16 03:47, Adam Vitkovsky wrote:
>>> From: Saku Ytti [mailto:s...@ytti.fi]
>>> Sent: Friday, January 15, 2016 10:18 AM
>>> On 15 January 2016 at 03:13, Christopher E. Brown
>>>  wrote:
 When the same folks were asked about the 16XGE card and the 120G (and
 later 160G) performance it was indicated that there was an additional
 layer of logic/asics used to tie all 4 trios in the 16XGE to the bus
 and that these ASICs offloaded some of the bus related overhead
 handling from the TRIOs, freeing up enough capacity to allow each TRIO in
>>> the 16XGE to provide a full 40G duplex after jcell/etc overhead.
>>> 
>>> 
>>> Sorry Christopher for being suspicious, but I think you must have made some
>>> mistake in your testing.
>>> 
>>> Only difference that I can think of, on top of the multicast replication, 
>>> is that
>>> 16XGE does not have TCAM. But that does not matter, as the TCAM isn't
>>> used for anything in MPC1/MPC2, it's just sitting there.
>>> MPC1/MPC2 can be bought without QX. You specifically mention '3D-Q'.
>>> If you were testing with QX enabled, then it's a wholly different thing.
>>> QX was never dimensioned to push all traffic in every port via QX, it's very
>>> very much underdimensioned for this. If MQ can do maybe ~70Gbps
>>> memory BW, QX can't do anywhere near 40Gbps. So if you enable QX or
>>> ingress+egress, you're gonna have very very limited performance.
>>> 
>> Saku is right, if you do the math then 40.960Gbps is a theoretical maximum 
>> for QX as a whole.
>> 
>> adam
>> 
>> 
>> 
>>Adam Vitkovsky
>>IP Engineer
>> 
>> T:  0333 006 5936
>> E:  adam.vitkov...@gamma.co.uk
>> W:  www.gamma.co.uk
>> 
>> 
>> 
>> 
> 
> 
> -- 
> 
> Christopher E. Brown      desk (907) 550-8393
> cell (907) 632-8492
> IP Engineer - ACS
> 
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX960 with 3 RE's?

2016-01-15 Thread Saku Ytti
None. You can do 40Gbps, with multicast, on each linecard.

On 15 January 2016 at 15:24, Kevin Wormington  wrote:
> So what are any bandwidth limitations of an all DPCE-R system with original 
> SCB?
>
> Thanks,
>
> Kevin
>
>
>> On Jan 15, 2016, at 6:59 AM, Christopher E. Brown 
>>  wrote:
>>
>>
>> The MPC2-Q is an advanced per unit queueing card and has QX, it also
>> runs against the lower/original fabric rate.
>>
>> The 16XGE rate is a port mode card with no QX, and it supports the first
>> of the fabric speed increases.
>>
>> The published capacity of the MPC2-Q is 30G per MIC with a supplied by
>> Juniper actual worst-case figure of 31.7G, climbing to about 39G with
>> larger frames.
>>
>> This matches with my own test results and it does not change with SCB model.
>>
>>
>> On 1/15/16 03:47, Adam Vitkovsky wrote:
 From: Saku Ytti [mailto:s...@ytti.fi]
 Sent: Friday, January 15, 2016 10:18 AM
 On 15 January 2016 at 03:13, Christopher E. Brown
  wrote:
> When the same folks were asked about the 16XGE card and the 120G (and
> later 160G) performance it was indicated that there was an additional
> layer of logic/asics used to tie all 4 trios in the 16XGE to the bus
> and that these ASICs offloaded some of the bus related overhead
> handling from the TRIOs, freeing up enough capacity to allow each TRIO in
 the 16XGE to provide a full 40G duplex after jcell/etc overhead.


 Sorry Christopher for being suspicious, but I think you must have made some
 mistake in your testing.

 Only difference that I can think of, on top of the multicast replication, 
 is that
 16XGE does not have TCAM. But that does not matter, as the TCAM isn't
 used for anything in MPC1/MPC2, it's just sitting there.
 MPC1/MPC2 can be bought without QX. You specifically mention '3D-Q'.
 If you were testing with QX enabled, then it's a wholly different thing.
 QX was never dimensioned to push all traffic in every port via QX, it's 
 very
 very much underdimensioned for this. If MQ can do maybe ~70Gbps
 memory BW, QX can't do anywhere near 40Gbps. So if you enable QX or
 ingress+egress, you're gonna have very very limited performance.

>>> Saku is right, if you do the math then 40.960Gbps is a theoretical maximum 
>>> for QX as a whole.
>>>
>>> adam
>>>
>>>
>>>
>>>Adam Vitkovsky
>>>IP Engineer
>>>
>>> T:  0333 006 5936
>>> E:  adam.vitkov...@gamma.co.uk
>>> W:  www.gamma.co.uk
>>>
>>>
>>>
>>>
>>
>>
>> --
>> 
>> Christopher E. Brown      desk (907) 550-8393
>> cell (907) 632-8492
>> IP Engineer - ACS
>> 
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp



-- 
  ++ytti
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX960 with 3 RE's?

2016-01-15 Thread Christopher E. Brown

The MPC2-Q is an advanced per-unit queueing card and has QX; it also
runs against the lower/original fabric rate.

The 16XGE is a port mode card with no QX, and it supports the first
of the fabric speed increases.

The published capacity of the MPC2-Q is 30G per MIC, with an actual
worst-case figure supplied by Juniper of 31.7G, climbing to about 39G with
larger frames.

This matches my own test results and it does not change with SCB model.


On 1/15/16 03:47, Adam Vitkovsky wrote:
>> From: Saku Ytti [mailto:s...@ytti.fi]
>> Sent: Friday, January 15, 2016 10:18 AM
>> On 15 January 2016 at 03:13, Christopher E. Brown
>>  wrote:
>>> When the same folks were asked about the 16XGE card and the 120G (and
>>> later 160G) performance it was indicated that there was an additional
>>> layer of logic/asics used to tie all 4 trios in the 16XGE to the bus
>>> and that these ASICs offloaded some of the bus related overhead
>>> handling from the TRIOs, freeing up enough capacity to allow each TRIO in
>> the 16XGE to provide a full 40G duplex after jcell/etc overhead.
>>
>>
>> Sorry Christopher for being suspicious, but I think you must have made some
>> mistake in your testing.
>>
>> Only difference that I can think of, on top of the multicast replication, is 
>> that
>> 16XGE does not have TCAM. But that does not matter, as the TCAM isn't
>> used for anything in MPC1/MPC2, it's just sitting there.
>> MPC1/MPC2 can be bought without QX. You specifically mention '3D-Q'.
>> If you were testing with QX enabled, then it's a wholly different thing.
>> QX was never dimensioned to push all traffic in every port via QX, it's very
>> very much underdimensioned for this. If MQ can do maybe ~70Gbps
>> memory BW, QX can't do anywhere near 40Gbps. So if you enable QX or
>> ingress+egress, you're gonna have very very limited performance.
>>
> Saku is right, if you do the math then 40.960Gbps is a theoretical maximum 
> for QX as a whole.
> 
> adam
> 
> 
> 
> Adam Vitkovsky
> IP Engineer
> 
> T:  0333 006 5936
> E:  adam.vitkov...@gamma.co.uk
> W:  www.gamma.co.uk
> 
> 
> 
> 


-- 

Christopher E. Brown      desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX960 with 3 RE's?

2016-01-15 Thread sthaug
> > I did not have gear for a 16XGE full test, but after SCBE upgrade I did
> > specifically test at full duplex line rate on all 4 ports in a bank on one 
> > card to
> > all 4 ports in a bank on a second card in the same chassis.
> >
> Alright, that's good news.
> And there's no QX chip on these right?

Correct.

Steinar Haug, Nethelp consulting, sth...@nethelp.no
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Colton Conor
Mark,

Thanks for the information. What is the smallest non-DPC 10G card you would
recommend? I am probably going to have a hard time getting away from DPC due
to the cost. A 4 port 10G DPC card can be had for under $1K on the used
market.

On Wed, Jan 13, 2016 at 11:45 PM, Mark Tinka  wrote:

>
>
> On 14/Jan/16 02:07, Colton Conor wrote:
>
> Well it would be RE-S-2000-4096 running the JTAC Recommended Junos
> Software Version Junos 13.3R8 plus the standard (not enhanced SCBs).
>
> I know more memory and 64 bit is usually better, but how does this help in
> Junos? From past threads, we have concluded that Junos is currently single
> thread/core in most all situations, and the RE-S-2000-4096 is faster than
> the RE in a MX80 and MX104. What does the more cores and quadruple memory
> get you in the RE-S-1800X4-16G that you can not do on a RE-S-2000-4096?
>
> The use case for this box would be full BGP tables and routing with 4+
> providers on 10G ports, plus a couple of ports to a peering exchange.
>
>
> As others running the RE-S-2000 have confirmed, you can run a recent Junos
> release on that RE today, which is great.
>
> The 64-bit RE gives you more memory to hold more routes, but if you only
> need 4x full BGP feeds today, the RE-S-2000 should be fine. Naturally, the
> newer RE will provide longer-term support for later Junos releases
> (especially with the architectural differences between Junos 15 and
> anything else before it). But in your case, the RE-S-2000 should be just
> fine.
>
> I am wondering what features the DPC's lack in this situation.
>
>
> - Lots of QoS limitations on the DPC compared to Trio.
> - Multicast restrictions on the DPC vs. the Trio.
> - No support for inline jflow on the DPC.
> - Differences in Tunnel PIC support on the DPC vs. the Trio (Trio is
> more flexible).
> - There may also be differences in Carrier Ethernet capabilities.
>
> I think you can get away with the RE-S-2000, but if you can, stay away
> from the DPC, just for peace of mind.
>
> Mark.
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Colton Conor
Thanks for the replies, but I am not interested in the 16XGE card. The
configuration I am referencing would be this:
http://www.ebay.com/itm/Juniper-MX960BASE-AC-10x-DPC-R-4XGE-XFP-MS-DPC-1yrWrnty-Free-Ship-40x-10G-MX960-/351615023814?hash=item51dde37ac6:g:WUcAAOSwLVZViHc~

Would I run into any problems with that configuration? I believe
everything would run at line rate.

On Thu, Jan 14, 2016 at 7:13 PM, Christopher E. Brown <
chris.br...@acsalaska.net> wrote:

>
> We actually spent a couple years spooling up for a greenfield build of a
> network required
> to provide very tight SLAs on a per-service basis.
>
> We were working with the MX BU directly for almost 2 years and constantly
> exchanging test
> results and loaner/testing cards.
>
> Per specs provided by the designers of the MPC2 and verified by our
> testing the worst case
> throughput per trio for an MPC2-3D or 3D-Q is 31.7G, that is full
> duplex with every
> packet going to or from bus and every packet being min length. That is
> actual packet,
> after all internal overhead.
>
> Larger packets bring this closer but not all the way to 40G, and traffic
> hairpinning on
> same trio can take it all the way to 40G.
>
> We did not investigate any impact of MCAST replication on this as MCAST
> replication is not
> part of the use case for us.
>
>
> When the same folks were asked about the 16XGE card and the 120G (and
> later 160G)
> performance it was indicated that there was an additional layer of
> logic/asics used to tie
> all 4 trios in the 16XGE to the bus and that these ASICs offloaded some of
> the bus related
> overhead handling from the TRIOs, freeing up enough capacity to allow each
> TRIO in the
> 16XGE to provide a full 40G duplex after jcell/etc overhead.
>
>
> On 1/14/2016 3:27 PM, Saku Ytti wrote:
> > On 15 January 2016 at 01:39, Christopher E. Brown
> >  wrote:
> >> The 30Gbit nominal (actual 31.7 or greater) limit per trio applies to
> the MPC1 and 2 cards
> >> but the quad trio interconnect in the 16XGE is wired up differently with
> additional helpers and
> >> can do the full 40G per bank.
> >
> >
> > Pretty sure this is not true. There are three factors in play
> >
> > a) memory bandwidth of trio (mq)
> > b) lookup performance of trio (lu)
> > c) fabric capacity
> >
> > With SCBE 16XGE isn't limited by fabric, but it's still limited by
> > memory bandwidth and lookup performance.
> >
> > Memory bandwidth is tricksy, due to how packets are split into cells,
> > if you have unlucky packet sizes, you're wasting memory bandwidth
> > sending padding, and can end up having maybe even that 30Gbps
> > performance, perhaps even less.
> > If you have lucky packet sizes, you can do 40Gbps. For me, I work
> > with half-duplex performance of 70Gbps, and that's been safe bet for
> > me, in real environments.
> >
> > Then there is lookup performance, which is like 50-55Mpps, out of
> > 60Mpps of maximum possible.
> >
> >
> >
> > Why DPCE=>MPC is particularly bad, is because even though MQ could
> > accept 20Gbps+20Gbps from the fabrics DPCE is sending, it's not, it's
> > only accepting 13+13, but DPCE isn't using the last fabric to send the
> > remaining 13Gbps.
> > So if your traffic flow from single DPCE card (say 4x10GE LACP) is out
> > to single trio, you'll be limited to just 26Gbps out of 40Gbps.
> >
> > If your chassis is SCBE or just MPC1/MPC2 you can 'set chassis fabric
> > redundancy-mode redundant', which causes MPC to use only two
> > of the fabrics, in which case it'll accept 20+20 from DPCE, allowing
> > you to get that full 40Gbps from DPCE.
> > But this also means, if you also have 16XGE and SCB, you can pretty
> > much use just half of the ports of 16XGE.
> >
> > If this is confusing, just don't run mixed box, or buy SCBE and run
> > redundant-mode.
> >
> >
> >
>
>
> --
> 
> Christopher E. Brown      desk (907) 550-8393
>  cell (907) 632-8492
> IP Engineer - ACS
> 
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Adam Vitkovsky
> From: Saku Ytti [mailto:s...@ytti.fi]
> Sent: Friday, January 15, 2016 12:45 AM
>
> On 15 January 2016 at 01:39, Christopher E. Brown
>  wrote:
> > The 30Gbit nominal (actual 31.7 or greater) limit per trio applies to
> > the MPC1 and 2 cards but the quad trio interconnect in the 16XGE is
> > wired up differently with additional helpers and can do the full 40G per bank.
>
> There is actually one difference in MPC1/MPC2 and 16XGE. MPC1/MPC2 do
> binary multicast replication, while 16XGE does unary.
I didn't know there's even worse performance than the binary multicast
replication (glad they did not do head-end replication).

>
> So if your box is full of 16X10GE, multicast stream hops from NPU to NPU, so
> every NPU gets the stream replicated at different time (i know stock
> exchanges running MX and masturbating over nanoseconds in switching
> latency, while they are doing three orders of magnitude larger jitters in MX
> replication).
> While if box is full of MPC1/MPC2, each card or NPU, will send copy to two
> other NPU, for dramatically more efficient replication. (Of course the
> theoretically correct way is to replicate in fabric, but I'm not paying one 
> cent
> for the right solution for this feature which should be just checkbox item at
> best)
>
Well, certainly not if you're doing IPTV on a fully loaded box, where you'd just
overload the fabric, or a stock exchange where you are obligated to deliver the
info to everyone at the same time, or HFT where nanoseconds actually do matter.

> This also means that if you run multicast, trio needs 80Gbps of fabric 
> capacity
> for those 40Gbps of WAN capacity. DPCE/ichip has this, DPCE is 40Gbps, but
> has 80Gbps of fabric capacity.
> SCBE and MPC2 has this, 80Gbps of WAN capacity and 160Gbps of fabric
> capacity.
> While 16XGE needs just same fabric capacity as WAN capacity.
>
What can I say, there's a reason why Juniper is cheaper. One has to know whether it
fits one's implementation (which is hard, by the way, with the sparse documentation
at hand).


adam


Adam Vitkovsky
IP Engineer

T:  0333 006 5936
E:  adam.vitkov...@gamma.co.uk
W:  www.gamma.co.uk



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Colton Conor
So assuming I got with something like this setup:
http://www.ebay.com/itm/Juniper-MX960BASE-AC-10x-DPC-R-4XGE-XFP-MS-DPC-1yrWrnty-Free-Ship-40x-10G-MX960-/351615023814?hash=item51dde37ac6:g:WUcAAOSwLVZViHc~

That has RE-2000's, Regular SCBs, and all DPC linecards. Would these all be
able to run at line rate?

On Thu, Jan 14, 2016 at 4:19 PM, Christopher E. Brown <
chris.br...@acsalaska.net> wrote:

>
> It is 120Gbit agg in a SCB system as the limit is 120G/slot.
>
> This is in the form of 30Gbit per TRIO and 1 TRIO per 4 ports.
>
> So, use 3 of 4 in each bank if you want 100% line rate.
>
> SCBE increases this to 160Gbit or better (depending on age of chassis)
> allowing for line rate on all 16.
>
>
> Agree, mixing DPC and MPC is a terrible idea.  Don't like DPC to begin
> with, but nobody in their right mind mixes DPCs and MPCs.
>
> On 1/14/16 13:09, Saku Ytti wrote:
> > On 14 January 2016 at 23:11, Mark Tinka  wrote:
> >
> > Hey,
> >
> >>> A surplus dealer recommended the MPC-3D-16XGE-SFPP-R-B, which is less
> >>> than $1k/port used.  But it only supports full bandwidth on 12 ports,
> >>> would be oversubscribed 4:3 if all 16 are in use.  They are Trio
> chipset.
> >
> >> I wouldn't be too concerned about the throughput on this line card. If
> >> you hit such an issue, you can easily justify the budget for better line
> >> cards.
> >
> > Pretty sure it's as wire-rate as MPC1, MPC2 on SCBE system.
> >
> > Particularly bad combo would be running with DPCE. Basically any
> > MPC+DPCE combo is bad, but at least on MPC1/MPC2 systems you can set
> > the fabric to redundancy mode. But with 16XGE+SCB you probably don't
> > want to.
> >
> > There is some sort of policing/limit on how many fabric grants MPC
> > gives up, like if your trio needs 40Gbps of capacity, and is operating
> > in normal mode (not redundancy) then it'll give 13.33Gbps of grants to
> > each SCB.
> > However as DPCE won't use the third one, DPCE could only send at
> > 26.66Gbps towards that 40Gbps trio. While MPC would happily send
> > 40Gbps to MPC.
> >
>
>
> --
> 
> Christopher E. Brown      desk (907) 550-8393
>  cell (907) 632-8492
> IP Engineer - ACS
> 
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Christopher E. Brown
On 1/14/2016 1:48 PM, Jeff wrote:
> Am 14.01.2016 um 23:19 schrieb Christopher E. Brown:
>>
>>
>> Agree, mixing DPC and MPC is a terrible idea.  Don't like DPC to begin
>> with, but nobody in their right mind mixes DPCs and MPCs.
>>
> 
> Why is that? The mentioned 16x 10G card actually sounds interesting but is 
> still quite
> expensive compared to the older DPCEs with 4x 10GE so we had them in mind for 
> our growth
> and are planning to use them together with the existing DPCEs. Why wouldn't 
> you do it?
> 
> 
> Thanks,
> Jeff
> 

DPCs and MPCs have different engines with different features available.  Running even one
DPC prevents running in enhanced mode, which limits features on the MPCs.  Historically, mixing
DPCs and MPCs was also asking to get bitten by the "bug of the week".
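
For reference, the mode in question is the chassis network-services setting; a
minimal sketch, assuming a Trio-only chassis (behaviour with DPCs still
installed varies by Junos release, so check the docs for your version):

  [edit]
  user@mx960# set chassis network-services enhanced-ip
  user@mx960# commit

The active mode can then be confirmed from operational mode with
'show chassis network-services'.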


I worked with and tested all MX hardware but downchecked MX for CarrierE use 
until the
trio based MPC2-Q cards were available due to limited traffic control/queueing 
features.


I deployed with and still use the 16XGE cards.  Initial build (early 2012) was 
with
RE2000/SCB and we used 3 out of 4 10G in each bank.  The NxXGE dist boxes were 
later
re-fit with SCBE cards and opened up for 16 XGE active per card.



If you are comfy with DPCs for, say, normal Inet backbone use (not, say, CarrierE
service termination and similar, where the MPC2/MPC-3 features can be critical) and
just want capacity, I guess you would be fine, barring any remaining "mix bugs".
The 16XGE is just a basic port mode card without a whole bunch of extra features.

If on the other hand you are doing edge feeding with hundreds to thousands of 
units per
card, lots of filters, etc...  Mixing MPC and DPC is likely still a very bad 
idea.

-- 

Christopher E. Brown      desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Adam Vitkovsky


> From: Christopher E. Brown [mailto:chris.br...@acsalaska.net]
> Sent: Thursday, January 14, 2016 11:39 PM
>
>
> In the datasheets for the 16XGE since release Juniper specifically called out
> 120G due to bus limit with SCB.
>
> Not in the sheet but actual is guaranteed 30G per bank of 4.
>
> We used first 4 of each bank in heavy core use and behavior is as specified.
>
> With change to SCBE this limit is removed and _all_ 16 ports are line rate
> capable at same time.
>
> The 30Gbit nominal (actual 31.7 or greater) limit per trio applies to the MPC1
> and 2 cards but the quad trio interconnect in the 16XGE is wired up differently with
> additional helpers and can do the full 40G per bank.
>
> There is no increase in usable capacity with the MPC1-3D and MPC2-3D cards
> with the SCBE though, still 30G.
>
>
> I did not have gear for a 16XGE full test, but after SCBE upgrade I did
> specifically test at full duplex line rate on all 4 ports in a bank on one 
> card to
> all 4 ports in a bank on a second card in the same chassis.
>
Alright, that's good news.
And there's no QX chip on these, right?

adam


Adam Vitkovsky
IP Engineer

T:  0333 006 5936
E:  adam.vitkov...@gamma.co.uk
W:  www.gamma.co.uk



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Christopher E. Brown

It is 120Gbit aggregate in an SCB system, as the limit is 120G/slot.

This is in the form of 30Gbit per TRIO and 1 TRIO per 4 ports.

So, use 3 of 4 in each bank if you want 100% line rate.

SCBE increases this to 160Gbit or better (depending on age of chassis)
allowing for line rate on all 16.
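
(If you're not sure which fabric boards a given chassis actually has, a quick
check is the hardware inventory, e.g.

  > show chassis hardware | match CB

which lists the control/switch board slots; the standard SCB and the enhanced
SCBE carry different descriptions in that output, though the exact strings vary
by revision.)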


Agree, mixing DPC and MPC is a terrible idea.  Don't like DPC to begin
with, but nobody in their right mind mixes DPCs and MPCs.

On 1/14/16 13:09, Saku Ytti wrote:
> On 14 January 2016 at 23:11, Mark Tinka  wrote:
> 
> Hey,
> 
>>> A surplus dealer recommended the MPC-3D-16XGE-SFPP-R-B, which is less
>>> than $1k/port used.  But it only supports full bandwidth on 12 ports,
>>> would be oversubscribed 4:3 if all 16 are in use.  They are Trio chipset.
> 
>> I wouldn't be too concerned about the throughput on this line card. If
>> you hit such an issue, you can easily justify the budget for better line
>> cards.
> 
> Pretty sure it's as wire-rate as MPC1, MPC2 on SCBE system.
> 
> Particularly bad combo would be running with DPCE. Basically any
> MPC+DPCE combo is bad, but at least on MPC1/MPC2 systems you can set
> the fabric to redundancy mode. But with 16XGE+SCB you probably don't
> want to.
> 
> There is some sort of policing/limit on how many fabric grants MPC
> gives up, like if your trio needs 40Gbps of capacity, and is operating
> in normal mode (not redundancy) then it'll give 13.33Gbps of grants to
> each SCB.
> However as DPCE won't use the third one, DPCE could only send at
> 26.66Gbps towards that 40Gbps trio. While MPC would happily send
> 40Gbps to MPC.
> 


-- 

Christopher E. Brown      desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Christopher E. Brown

With 2 active SCBs (2+0 non-redundant or 2+1 redundant in a 960) you are 
capable of 120G
per slot.

The worst case throughput in this case would depend on the cards.

For a single-Trio MPC1 it would be 30Gbit per card (actual 31.7 to ~39 depending
on packet mix).

For a dual-Trio MPC2 it would be the same per half card, or 60G total.

For the quad-Trio 16XGE it would be 30G per bank of 4 until you use an SCBE or
better, then 40G per bank of 4.


I do not recall the specs for those DPCs, as we downchecked all DPCs for other 
reasons and
held off deployment until the release of the MPC2-3D-Q cards.


On 1/14/2016 1:37 PM, Colton Conor wrote:
> So assuming I got with something like this
> setup: 
> http://www.ebay.com/itm/Juniper-MX960BASE-AC-10x-DPC-R-4XGE-XFP-MS-DPC-1yrWrnty-Free-Ship-40x-10G-MX960-/351615023814?hash=item51dde37ac6:g:WUcAAOSwLVZViHc~
> 
> That has RE-2000's, Regular SCBs, and all DPC linecards. Would these all be 
> able to run at
> line rate? 
> 
> On Thu, Jan 14, 2016 at 4:19 PM, Christopher E. Brown 
>  > wrote:
> 
> 
> It is 120Gbit agg in a SCB system as the limit is 120G/slot.
> 
> This is in the form of 30Gbit per TRIO and 1 TRIO per 4 ports.
> 
> So, use 3 of 4 in each bank if you want 100% line rate.
> 
> SCBE increases this to 160Gbit or better (depending on age of chassis)
> allowing for line rate on all 16.
> 
> 
> Agree, mixing DPC and MPC is a terrible idea.  Don't like DPC to begin
> with, but nobody in their right mind mixes DPCs and MPCs.
> 
> On 1/14/16 13:09, Saku Ytti wrote:
> > On 14 January 2016 at 23:11, Mark Tinka  > wrote:
> >
> > Hey,
> >
> >>> A surplus dealer recommended the MPC-3D-16XGE-SFPP-R-B, which is less
> >>> than $1k/port used.  But it only supports full bandwidth on 12 ports,
> >>> would be oversubscribed 4:3 if all 16 are in use.  They are Trio 
> chipset.
> >
> >> I wouldn't be too concerned about the throughput on this line card. If
> >> you hit such an issue, you can easily justify the budget for better 
> line
> >> cards.
> >
> > Pretty sure it's as wire-rate as MPC1, MPC2 on SCBE system.
> >
> > Particularly bad combo would be running with DPCE. Basically any
> > MPC+DPCE combo is bad, but at least on MPC1/MPC2 systems you can set
> > the fabric to redundancy mode. But with 16XGE+SCB you probably don't
> > want to.
> >
> > There is some sort of policing/limit on how many fabric grants MPC
> > gives up, like if your trio needs 40Gbps of capacity, and is operating
> > in normal mode (not redundancy) then it'll give 13.33Gbps of grants to
> > each SCB.
> > However as DPCE won't use the third one, DPCE could only send at
> > 26.66Gbps towards that 40Gbps trio. While MPC would happily send
> > 40Gbps to MPC.
> >
> 
> 
> --
> 
> Christopher E. Brown    > 
>  desk (907) 550-8393 
>  cell (907) 632-8492
> 
> IP Engineer - ACS
> 
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net 
> 
> https://puck.nether.net/mailman/listinfo/juniper-nsp
> 
> 


-- 

Christopher E. Brown      desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Adam Vitkovsky


> Jeff
> Sent: Thursday, January 14, 2016 10:47 PM
> To: Mark Tinka; Colton Conor; Tom Storey
> Cc: Juniper List
> Subject: Re: [j-nsp] MX960 with 3 RE's?
>
> Am 14.01.2016 um 06:45 schrieb Mark Tinka:
> > The 64-bit RE gives you more memory to hold more routes, but if you
> > only need 4x full BGP feeds today, the RE-S-2000 should be fine.
> > Naturally, the newer RE will provide longer-term support for later
> > Junos releases (especially with the architectural differences between
> > Junos 15 and anything else before it). But in your case, the RE-S-2000
> > should be just fine.
>
> Can you elaborate on that? Will the RE-2000 not work with JunOS 15+? We
> are actually planning to upgrade to JunOS soon because it supports NDP
> cache protection:
>
> http://www.juniper.net/documentation/en_US/junos15.1/topics/task/confi
> guration/ndp-cache-protection-configuring.html
>
>
> I know it's quite new and not recommended but in my eyes this feature is
> quite important to secure the network unless you want to assign /126 and
> alike  to your customers instead of the recommended, typical /64 and
> increase management overhead for everybody involved.
>
>
What? Now that's a colossal fail; NDP cache exhaustion as a possible DoS
vector, and the particular mitigation techniques, have been known for years now.
Junos won't stop surprising me.


adam



Adam Vitkovsky
IP Engineer

T:  0333 006 5936
E:  adam.vitkov...@gamma.co.uk
W:  www.gamma.co.uk



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Jeff

Am 14.01.2016 um 23:19 schrieb Christopher E. Brown:



Agree, mixing DPC and MPC is a terrible idea.  Don't like DPC to begin
with, but nobody in their right mind mixes DPCs and MPCs.



Why is that? The mentioned 16x 10G card actually sounds interesting but 
is still quite expensive compared to the older DPCEs with 4x 10GE so we 
had them in mind for our growth and are planning to use them together 
with the existing DPCEs. Why wouldn't you do it?



Thanks,
Jeff
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Christopher E. Brown

In the datasheets for the 16XGE, since release, Juniper has specifically called out
120G due to the bus limit with the SCB.

Not in the sheet, but the actual guarantee is 30G per bank of 4.

We used first 4 of each bank in heavy core use and behavior is as specified.

With the change to SCBE this limit is removed and _all_ 16 ports are line-rate
capable at the same time.

The 30Gbit nominal (actual 31.7 or greater) limit per trio applies to the MPC1
and 2 cards, but the quad trio interconnect in the 16XGE is wired up differently,
with additional helpers, and can do the full 40G per bank.

There is no increase in usable capacity with the MPC1-3D and MPC2-3D cards with 
the SCBE
though, still 30G.


I did not have gear for a 16XGE full test, but after SCBE upgrade I did 
specifically test
at full duplex line rate on all 4 ports in a bank on one card to all 4 ports in 
a bank on
a second card in the same chassis.

On 1/14/2016 2:13 PM, Adam Vitkovsky wrote:
>> Christopher E. Brown
>> Sent: Thursday, January 14, 2016 10:20 PM
>> To: juniper-nsp@puck.nether.net
>> Subject: Re: [j-nsp] MX960 with 3 RE's?
>>
>>
>> It is 120Gbit agg in a SCB system as the limit is 120G/slot.
>>
>> This is in the form of 30Gbit per TRIO and 1 TRIO per 4 ports.
>>
>> So, use 3 of 4 in each bank if you want 100% line rate.
>>
>> SCBE increases this to 160Gbit or better (depending on age of chassis)
>> allowing for line rate on all 16.
>>
> So the 16XGE cards don’t have the limitation of max 38Gbps between Trio and 
> Fabric please?
> 
> 
> adam
> 
> 
> Adam Vitkovsky
> IP Engineer
> 
> T:  0333 006 5936
> E:  adam.vitkov...@gamma.co.uk
> W:  www.gamma.co.uk
> 
> 
> 
> 


-- 

Christopher E. Brown   <chris.br...@acsalaska.net>   desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Saku Ytti
On 15 January 2016 at 01:39, Christopher E. Brown
 wrote:
> The 30Gbit nominal (actual 31.7 or greater) limit per trio applies to the 
> MPC1 and 2 cards
> but the quad trio interconnect in the 16XGE is wired up differently with additional
> helpers and
> can do the full 40G per bank.


Pretty sure this is not true. There are three factors in play

a) memory bandwidth of trio (mq)
b) lookup performance of trio (lu)
c) fabric capacity

With SCBE 16XGE isn't limited by fabric, but it's still limited by
memory bandwidth and lookup performance.

Memory bandwidth is tricksy: due to how packets are split into cells,
if you have unlucky packet sizes you're wasting memory bandwidth
sending padding, and can end up having maybe even that 30Gbps
performance, perhaps even less.
If you have lucky packet sizes, you can do 40Gbps. For me, I work
with a half-duplex performance figure of 70Gbps, and that's been a safe bet
for me, in real environments.

Then there is lookup performance, which is like 50-55Mpps out of a
maximum possible 60Mpps.



Why DPCE=>MPC is particularly bad is that even though the MQ could
accept 20Gbps+20Gbps from the fabrics the DPCE is sending on, it's not; it's
only accepting 13+13, and the DPCE isn't using the last fabric to send the
remaining 13Gbps.
So if your traffic flow from a single DPCE card (say 4x10GE LACP) is out
to a single trio, you'll be limited to just 26Gbps out of 40Gbps.

If your chassis is SCBE or just MPC1/MPC2 you can 'set chassis fabric
redundancy-mode redundant', which causes the MPC to use only two
of the fabrics, in which case it'll accept 20+20 from the DPCE, allowing
you to get that full 40Gbps from the DPCE.
But this also means that if you also have 16XGE and SCB, you can pretty
much use just half of the ports of the 16XGE.

If this is confusing, just don't run a mixed box, or buy SCBE and run
redundant mode.
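
For clarity, the knob referred to above is a single statement under
[edit chassis fabric]; a minimal sketch (verify the default and exact behaviour
for your Junos release before relying on it):

  [edit]
  user@mx960# set chassis fabric redundancy-mode redundant
  user@mx960# commit

In redundant mode the line cards spread traffic over two fabric planes instead
of three, which is what makes the DPCE and MPC grant accounting line up.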



-- 
  ++ytti
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Christopher E. Brown

We actually spent a couple of years spooling up for a greenfield build of a
network required to provide very tight SLAs on a per-service basis.

We were working with the MX BU directly for almost 2 years and constantly 
exchanging test
results and loaner/testing cards.

Per specs provided by the designers of the MPC2, and verified by our testing, the
worst-case throughput per trio for an MPC2-3D or 3D-Q is 31.7G; that is full duplex
with every packet going to or from the bus and every packet being minimum length.
That is actual packet, after all internal overhead.

Larger packets bring this closer, but not all the way, to 40G, and traffic
hairpinning on the same trio can take it all the way to 40G.

We did not investigate any impact of MCAST replication on this as MCAST 
replication is not
part of the use case for us.


When the same folks were asked about the 16XGE card and the 120G (and later 
160G)
performance it was indicated that there was an additional layer of logic/asics 
used to tie
all 4 trios in the 16XGE to the bus and that these ASICs offloaded some of the 
bus related
overhead handling from the TRIOs, freeing up enough capacity to allow each TRIO 
in the
16XGE to provide a full 40G duplex after jcell/etc overhead.


On 1/14/2016 3:27 PM, Saku Ytti wrote:
> On 15 January 2016 at 01:39, Christopher E. Brown
>  wrote:
>> The 30Gbit nominal (actual 31.7 or greater) limit per trio applies to the 
>> MPC1 and 2 cards
>> but the quad trio interconnect in the 16XGE is wired up differently with additional
>> helpers and
>> can do the full 40G per bank.
> 
> 
> Pretty sure this is not true. There are three factors in play
> 
> a) memory bandwidth of trio (mq)
> b) lookup performance of trio (lu)
> c) fabric capacity
> 
> With SCBE 16XGE isn't limited by fabric, but it's still limited by
> memory bandwidth and lookup performance.
> 
> Memory bandwidth is tricksy, due to how packets are split into cells,
> if you have unlucky packet sizes, you're wasting memory bandwidth
> sending padding, and can end up having maybe even that 30Gbps
> performance, perhaps even less.
> If you have lucky packet sizes, you can do 40Gbps. For me, I work
> with half-duplex performance of 70Gbps, and that's been safe bet for
> me, in real environments.
> 
> Then there is lookup performance, which is like 50-55Mpps, out of
> 60Mpps of maximum possible.
> 
> 
> 
> Why DPCE=>MPC is particularly bad, is because even though MQ could
> accept 20Gbps+20Gbps from the fabrics DPCE is sending, it's not, it's
> only accepting 13+13, but DPCE isn't using the last fabric to send the
> remaining 13Gbps.
> So if your traffic flow from single DPCE card (say 4x10GE LACP) is out
> to single trio, you'll be limited to just 26Gbps out of 40Gbps.
> 
> If your chassis is SCBE or just MPC1/MPC2 you can 'set chassis fabric
> redundancy-mode redundant', which causes MPC to use only two
> of the fabrics, in which case it'll accept 20+20 from DPCE, allowing
> you to get that full 40Gbps from DPCE.
> But this also means, if you also have 16XGE and SCB, you can pretty
> much use just half of the ports of 16XGE.
> 
> If this is confusing, just don't run mixed box, or buy SCBE and run
> redundant-mode.
> 
> 
> 


-- 

Christopher E. Brown      desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Adam Vitkovsky
> Christopher E. Brown
> Sent: Thursday, January 14, 2016 10:20 PM
> To: juniper-nsp@puck.nether.net
> Subject: Re: [j-nsp] MX960 with 3 RE's?
>
>
> It is 120Gbit agg in a SCB system as the limit is 120G/slot.
>
> This is in the form of 30Gbit per TRIO and 1 TRIO per 4 ports.
>
> So, use 3 of 4 in each bank if you want 100% line rate.
>
> SCBE increases this to 160Gbit or better (depending on age of chassis)
> allowing for line rate on all 16.
>
So the 16XGE cards don’t have the limitation of max 38Gbps between Trio and 
Fabric please?


adam


Adam Vitkovsky
IP Engineer

T:  0333 006 5936
E:  adam.vitkov...@gamma.co.uk
W:  www.gamma.co.uk



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread joel jaeggli
On 1/14/16 2:48 PM, Jeff wrote:
> Am 14.01.2016 um 23:19 schrieb Christopher E. Brown:
>>
>>
>> Agree, mixing DPC and MPC is a terrible idea.  Don't like DPC to begin
>> with, but nobody in their right mind mixes DPCs and MPCs.
>>
> 
> Why is that? The mentioned 16x 10G card actually sounds interesting but
> is still quite expensive compared to the older DPCEs with 4x 10GE so we
> had them in mind for our growth and are planning to use them together
> with the existing DPCEs. Why wouldn't you do it?

different asics, with varying amounts of memory, a long history of
stability problems running jointly that neither had separately, and a lack
of feature parity (e.g. inline jflow).

> 
> Thanks,
> Jeff
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
> 




___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Saku Ytti
On 15 January 2016 at 01:39, Christopher E. Brown
 wrote:
> The 30Gbit nominal (actual 31.7 or greater) limit per trio applies to the 
> MPC1 and 2 cards
> but the quad trio interconnect in the 16XGE is wired up differently with additional
> helpers and
> can do the full 40G per bank.

There is actually one difference between MPC1/MPC2 and 16XGE: MPC1/MPC2 do
binary multicast replication, while the 16XGE does unary.

So if your box is full of 16X10GE, multicast stream hops from NPU to
NPU, so every NPU gets the stream replicated at different time (i know
stock exchanges running MX and masturbating over nanoseconds in
switching latency, while they are doing three orders of magnitude
larger jitters in MX replication).
While if box is full of MPC1/MPC2, each card or NPU, will send copy to
two other NPU, for dramatically more efficient replication. (Of course
the theoretically correct way is to replicate in fabric, but I'm not
paying one cent for the right solution for this feature which should
be just checkbox item at best)

This also means that if you run multicast, the trio needs 80Gbps of fabric
capacity for those 40Gbps of WAN capacity. DPCE/I-chip has this: the DPCE
is 40Gbps, but has 80Gbps of fabric capacity.
SCBE and MPC2 have this: 80Gbps of WAN capacity and 160Gbps of fabric capacity.
While the 16XGE has just the same fabric capacity as its WAN capacity.
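
(If you want to eyeball this on a live box, the fabric plane state and per-PFE
fabric links are visible with the usual operational commands, e.g.

  > show chassis fabric summary
  > show chassis fabric plane

exact output fields depend on SCB vs SCBE and the Junos release, so treat this
as a rough pointer rather than a throughput measurement.)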

-- 
  ++ytti
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Mark Tinka


On 15/Jan/16 00:09, Saku Ytti wrote:

> Pretty sure it's as wire-rate as MPC1, MPC2 on SCBE system.

Yes, but I think the OP wants to buy the SCB.

Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Mark Tinka


On 14/Jan/16 20:55, Jim Troutman wrote:

> I'm also very curious about inexpensive MX480/MX960 10G routing
> options for use as MPLS  P or maybe PE routers.
>
> A surplus dealer recommended the MPC-3D-16XGE-SFPP-R-B, which is less
> than $1k/port used.  But it only supports full bandwidth on 12 ports,
> would be oversubscribed 4:3 if all 16 are in use.  They are Trio chipset.

If you can get that line card for that price, I'd take it.

There are some limitations when doing VLAN-based QoS on this particular
line card, but all other features work well.

I wouldn't be too concerned about the throughput on this line card. If
you hit such an issue, you can easily justify the budget for better line
cards.

Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Jim Troutman
I'm also very curious about inexpensive MX480/MX960 10G routing options for
use as MPLS  P or maybe PE routers.

A surplus dealer recommended the MPC-3D-16XGE-SFPP-R-B, which is less than
$1k/port used.  But it only supports full bandwidth on 12 ports, would be
oversubscribed 4:3 if all 16 are in use.  They are Trio chipset.



On Thu, Jan 14, 2016 at 8:32 AM, Colton Conor 
wrote:

> Mark,
>
> Thanks for the information.What is the smallest non-DPC 10G card you would
> recommend? I am probably going to have a hard time getting away from DPC due
> to the cost. A 4 port 10G DPC card can be had for under $1K on the used
> market.
>
> On Wed, Jan 13, 2016 at 11:45 PM, Mark Tinka  wrote:
>
> >
> >
> > On 14/Jan/16 02:07, Colton Conor wrote:
> >
> > Well it would be RE-S-2000-4096 running the JTAC Recommended Junos
> > Software Version Junos 13.3R8 plus the standard (not enhanced SCBs).
> >
> > I know more memory and 64 bit is usually better, but how does this help
> in
> > Junos? From past threads, we have concluded that Junos is currently
> single
> > thread/core in most all situations, and the RE-S-2000-4096 is faster than
> > the RE in a MX80 and MX104. What does the more cores and quadruple memory
> > get you in the RE-S-1800X4-16G that you can not do on a RE-S-2000-4096?
> >
> > The use case for this box would be full BGP tables and routing with 4+
> > providers on 10G ports, plus a couple of ports to a peering exchange.
> >
> >
> > As others running the RE-S-2000 have confirmed, you can run a recent
> Junos
> > release on that RE today, which is great.
> >
> > The 64-bit RE gives you more memory to hold more routes, but if you only
> > need 4x full BGP feeds today, the RE-S-2000 should be fine. Naturally,
> the
> > newer RE will provide longer-term support for later Junos releases
> > (especially with the architectural differences between Junos 15 and
> > anything else before it). But in your case, the RE-S-2000 should be just
> > fine.
> >
> > I am wondering what features the DPC's lack in this situation.
> >
> >
> > - Lots of QoS limitations on the DPC compared to Trio.
> > - Multicast restrictions on the DPC vs. the Trio.
> > - No support for inline jflow on the DPC.
> > - Differences in Tunnel PIC support on the DPC vs. the Trio (Trio is
> > more flexible).
> > - There may also be differences in Carrier Ethernet capabilities.
> >
> > I think you can get away with the RE-S-2000, but if you can, stay away
> > from the DPC, just for peace of mind.
> >
> > Mark.
> >
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>



-- 
Jim Troutman,
jamesltrout...@gmail.com
800-605-0192 (main)
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Saku Ytti
On 14 January 2016 at 23:11, Mark Tinka  wrote:

Hey,

>> A surplus dealer recommended the MPC-3D-16XGE-SFPP-R-B, which is less
>> than $1k/port used.  But it only supports full bandwidth on 12 ports,
>> would be oversubscribed 4:3 if all 16 are in use.  They are Trio chipset.

> I wouldn't be too concerned about the throughput on this line card. If
> you hit such an issue, you can easily justify the budget for better line
> cards.

Pretty sure it's as wire-rate as MPC1/MPC2 on an SCBE system.

A particularly bad combo would be running it with DPCE. Basically any
MPC+DPCE combo is bad, but at least on MPC1/MPC2 systems you can set
the fabric to redundancy mode. But with 16XGE+SCB you probably don't
want to.

There is some sort of policing/limit on how many fabric grants an MPC
gives out: if your Trio needs 40Gbps of capacity and is operating in
normal mode (not redundancy), it'll give 13.33Gbps of grants to each
SCB.
However, as a DPCE won't use the third one, the DPCE could only send at
26.66Gbps towards that 40Gbps Trio, while an MPC would happily send
40Gbps to an MPC.
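
If it helps, here's a rough back-of-the-envelope sketch of that grant
arithmetic (my own illustration, not anything from Juniper docs; it just
assumes the even split across SCBs described above):

    # Python sketch: assume grants are split evenly across SCBs in normal
    # (non-redundancy) fabric mode, and a DPCE can only use two of the three SCBs.
    TRIO_CAPACITY_GBPS = 40.0       # capacity one Trio hands out grants for
    SCBS_INSTALLED = 3              # fully populated MX960

    grants_per_scb = TRIO_CAPACITY_GBPS / SCBS_INSTALLED   # ~13.33 Gbps

    mpc_to_mpc = grants_per_scb * 3    # an MPC sender uses all 3 SCBs  -> 40 Gbps
    dpce_to_mpc = grants_per_scb * 2   # a DPCE sender uses only 2 SCBs -> ~26.67 Gbps

    print(grants_per_scb, mpc_to_mpc, dpce_to_mpc)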

-- 
  ++ytti
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-13 Thread Jérôme Nicolle
Hi,

On 13/01/2016 05:21, Colton Conor wrote:
> Then how do they have 3 RE's listed and house in the picture? Is the 3 RE
> in the 3 SBC just in there, but would not be powered on or usable?

Looks like the broker had a few spare SCBs and REs and no functional
chassis to slap them in, so he invented a bundle and didn't fully test it.

Best regards,

-- 
Jérôme Nicolle
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-13 Thread Mark Tinka


On 14/Jan/16 02:07, Colton Conor wrote:

> Well it would be RE-S-2000-4096 running the JTAC Recommended Junos
> Software Version Junos 13.3R8 plus the standard (not enhanced SCBs).
>
> I know more memory and 64 bit is usually better, but how does this
> help in Junos? From past threads, we have concluded that Junos is
> currently single thread/core in most all situations, and
> the RE-S-2000-4096 is faster than the RE in a MX80 and MX104. What
> does the more cores and quadruple memory get you in
> the RE-S-1800X4-16G that you can not do on a RE-S-2000-4096?
>
> The use case for this box would be full BGP tables and routing with 4+
> providers on 10G ports, plus a couple of ports to a peering exchange.

As others running the RE-S-2000 have confirmed, you can run a recent
Junos release on that RE today, which is great.

The 64-bit RE gives you more memory to hold more routes, but if you only
need 4x full BGP feeds today, the RE-S-2000 should be fine. Naturally,
the newer RE will provide longer-term support for later Junos releases
(especially with the architectural differences between Junos 15 and
anything else before it). But in your case, the RE-S-2000 should be just
fine.

> I am wondering what features the DPC's lack in this situation.

- Lots of QoS limitations on the DPC compared to Trio.
- Multicast restrictions on the DPC vs. the Trio.
- No support for inline jflow on the DPC.
- Differences in Tunnel PIC support on the DPC vs. the Trio (Trio is
more flexible).
- There may also be differences in Carrier Ethernet capabilities.

I think you can get away with the RE-S-2000, but if you can, stay away
from the DPC, just for peace of mind.

Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-13 Thread Tom Storey
On 13 January 2016 at 22:32, Mark Tinka  wrote:
> A more current RE means you can run more recent Junos releases. I
> haven't run the RE-S-2000 in a while, so not sure how well it's
> supported by current Junos releases (someone else who has the older RE's
> might want to chime in).

Guess it depends how old an RE you are talking about, and how recent JunOS?

Have seen RE-S-2000's running 13.3.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-13 Thread Chris Kawchuk
Used RE-S-2000 w/SCBE and JunOS 14.2 with JAM for MPC3-NG Cards. No issues.

Mostly running 13.3R6 or R8 across our core, which is dual RE-S-2000's on
MX480.

- CK.


On 14/01/2016, at 9:11 AM, Tom Storey  wrote:

> On 13 January 2016 at 22:32, Mark Tinka  wrote:
>> A more current RE means you can run more recent Junos releases. I
>> haven't run the RE-S-2000 in a while, so not sure how well it's
>> supported by current Junos releases (someone else who has the older RE's
>> might want to chime in).
> 
> Guess it depends how old an RE you are talking about, and how recent JunOS?
> 
> Have seen RE-S-2000's running 13.3.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-13 Thread Mark Tinka


On 13/Jan/16 20:05, Colton Conor wrote:

>
> Are there any feature limitations I should be aware of using a MX960 with
> the above configuration and older I beleive EOL'd DPC-R-4XGE-XFP
> linecards?  For comparison's sake, I am upgrading from a MX80 using the 4
> 10G ports on the front of the MX80.

The MX80 is a Trio-based platform.

The DPC's are older tech., and some features you might be accustomed to
on the MX80 may not be supported on the DPC's. Don't take this lightly,
there are a lot of things you may take for granted on the Trio whose
absence (or eccentricities) will shock you on the DPC. If you can, avoid
the DPC's.

A more current RE means you can run more recent Junos releases. I
haven't run the RE-S-2000 in a while, so not sure how well it's
supported by current Junos releases (someone else who has the older RE's
might want to chime in). The 64-bit RE's also give you more memory,
which is always useful.

Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-13 Thread Colton Conor
Well it would be RE-S-2000-4096 running the JTAC Recommended Junos Software
Version Junos 13.3R8 plus the standard (not enhanced SCBs).

I know more memory and 64-bit is usually better, but how does this help in
Junos? From past threads, we have concluded that Junos is currently single
thread/core in almost all situations, and the RE-S-2000-4096 is faster than
the RE in an MX80 and MX104. What do the extra cores and quadruple memory
get you in the RE-S-1800X4-16G that you cannot do on a RE-S-2000-4096?

The use case for this box would be full BGP tables and routing with 4+
providers on 10G ports, plus a couple of ports to a peering exchange. I am
wondering what features the DPC's lack in this situation.

On Wed, Jan 13, 2016 at 4:11 PM, Tom Storey  wrote:

> On 13 January 2016 at 22:32, Mark Tinka  wrote:
> > A more current RE means you can run more recent Junos releases. I
> > haven't run the RE-S-2000 in a while, so not sure how well it's
> > supported by current Junos releases (someone else who has the older RE's
> > might want to chime in).
>
> Guess it depends how old an RE you are talking about, and how recent JunOS?
>
> Have seen RE-S-2000's running 13.3.
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-13 Thread Mark Tinka


On 13/Jan/16 12:42, Jérôme Nicolle wrote:

> Looks like the broker had a few spare SCB and REs and no functionnal
> chassis to slap them in, so he invented a bundle and didn't fully test it.

If the price is right, I'd take the extra RE and SCB as backup, assuming
leaving them out does not yield any significant saving.

Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-13 Thread Colton Conor
Just to confirm though, it's the extra RE that is different and not
supported in this config, right? The MX960 can use 3 SCB's at once, but only
2 REs? Or do I have that wrong too?



On Wed, Jan 13, 2016 at 8:16 AM, Mark Tinka  wrote:

>
>
> On 13/Jan/16 12:42, Jérôme Nicolle wrote:
>
> > Looks like the broker had a few spare SCB and REs and no functionnal
> > chassis to slap them in, so he invented a bundle and didn't fully test
> it.
>
> If the price is right, I'd take the extra RE and SCB as backup, in case
> not including them does not yield any significant saving.
>
> Mark.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX960 with 3 RE's?

2016-01-13 Thread Daniel Roesen
On Wed, Jan 13, 2016 at 09:06:55AM -0800, joel jaeggli wrote:
> An mx960 has a full fabric with two SCBs.

That depends on the specific MPCs used in the chassis. Some do get
full performance only with all 3 SCBs.

> it is n+1 redundant with 3. e.g. at that point you can swap one
> without cutting the fabric bandwidth in half.

Technically, with MPCs (unlike DPCs), when removing the third SCB
you're removing 1/3rd of the fabric bandwidth as all traffic is
sprayed across all forwarding planes (two per SCB).

With DPCs, only two SCBs (thus four fabric planes) could be active,
with the third SCB being hot standby.
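
A tiny sketch of that plane counting, if it helps (the per-plane figure is
just a placeholder of mine; only the ratios matter):

    # Python sketch: fabric capacity vs. active SCBs.
    PLANES_PER_SCB = 2
    PER_PLANE_GBPS = 20.0   # hypothetical value, for illustration only

    def fabric_capacity(active_scbs):
        return active_scbs * PLANES_PER_SCB * PER_PLANE_GBPS

    # MPC-style: traffic is sprayed over all planes of all installed SCBs,
    # so pulling the third SCB drops fabric bandwidth by a third.
    print(fabric_capacity(3), fabric_capacity(2))   # 120.0 80.0

    # DPC-style: only two SCBs (four planes) are ever active; the third SCB
    # is hot standby, so removing it changes nothing.
    print(fabric_capacity(2))                       # 80.0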

Best regards,
Daniel

-- 
CLUE-RIPE -- Jabber: d...@cluenet.de -- dr@IRCnet -- PGP: 0xA85C8AA0
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-13 Thread Scott Granados
Sounds correct, only 2 routing engines at a time.

> On Jan 13, 2016, at 11:59 AM, Colton Conor  wrote:
> 
> Just to confirm though, its the extra RE that is different and not
> supported in this config right? The MX960 can use 3 SCB's at once, but only
> 2 REs? Or do I have the wrong too?
> 
> 
> 
> On Wed, Jan 13, 2016 at 8:16 AM, Mark Tinka  wrote:
> 
>> 
>> 
>> On 13/Jan/16 12:42, Jérôme Nicolle wrote:
>> 
>>> Looks like the broker had a few spare SCB and REs and no functionnal
>>> chassis to slap them in, so he invented a bundle and didn't fully test
>> it.
>> 
>> If the price is right, I'd take the extra RE and SCB as backup, in case
>> not including them does not yield any significant saving.
>> 
>> Mark.
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>> 
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX960 with 3 RE's?

2016-01-13 Thread Niall Donaghy
That's correct. On MX960 you can have 3 SCBs running 2+1 redundancy or 3+0 
performance mode.
With 2 SCBs you run 2+0, i.e. without redundancy. In the case of one failing, you
are 1+0 and on reduced capacity; AFAIK it's not terminal -- just very
very suboptimal. :)

With SCBE fabric, here are the Gbps-per-slot rates:
Redundant mode    2+1  160 Gbps
Performance mode  3+0  240 Gbps

With SCB2E fabric (according to JNPR's latest figures):
Redundant mode    2+1  360 Gbps
Performance mode  3+0  480 Gbps

If you have SCBE fabric and MPC4E cards, then in 2+1 mode you have 280Gbps of 
ports but only 160Gbps on the slot. In 3+0 mode you have 40 Gbps of 
oversubscription if you fully load it, etc.
With MPC3E cards, you can only push 137Gbps even though the slot supports 160 
Gbps.
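
A quick sanity check of those figures (my own arithmetic against the slot
rates above, not vendor data):

    # Python sketch: per-slot fabric rate (from the table above) vs. the
    # nominal port capacity of a card.
    SLOT_GBPS = {
        ("SCBE", "2+1"): 160, ("SCBE", "3+0"): 240,
        ("SCB2E", "2+1"): 360, ("SCB2E", "3+0"): 480,
    }

    def oversubscription(port_gbps, fabric, mode):
        return max(port_gbps - SLOT_GBPS[(fabric, mode)], 0)

    # MPC4E (~280 Gbps of ports) behind SCBE fabric:
    print(oversubscription(280, "SCBE", "2+1"))   # 120 Gbps oversubscribed
    print(oversubscription(280, "SCBE", "3+0"))   # the 40 Gbps mentioned above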

We have SCB2E cards on the shelf awaiting rollout to our estate and I'm curious 
to see what kind of throughput we actually get on MPC4E.

Finally, about the REs: you can physically house a spare RE in the dual-purpose
SCB 2 / FPC 6 slot, but I believe it lacks the additional backplane connections to
enable the RE to work, hence it lies there dead.

Kind regards,
Niall

> -Original Message-
> From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of
> joel jaeggli
> Sent: 13 January 2016 17:07
> To: Colton Conor; Mark Tinka
> Cc: Juniper List
> Subject: Re: [j-nsp] MX960 with 3 RE's?
> 
> On 1/13/16 8:59 AM, Colton Conor wrote:
> > Just to confirm though, its the extra RE that is different and not
> > supported in this config right? The MX960 can use 3 SCB's at once, but
> > only
> > 2 REs? Or do I have the wrong too?
> 
> An mx960 has a full fabric with two SCBs. it is n+1 redundant with 3.
> e.g. at that point you can swap one without cutting the fabric bandwidth in 
> half.
> 
> NSR/GRES uses two REs. a third RE in the chassis is cold and dead. iI's 
> probably
> an acceptable place to store a spare.
> 
> >
> >
> >
> > On Wed, Jan 13, 2016 at 8:16 AM, Mark Tinka <mark.ti...@seacom.mu>
> wrote:
> >
> >>
> >>
> >> On 13/Jan/16 12:42, Jérôme Nicolle wrote:
> >>
> >>> Looks like the broker had a few spare SCB and REs and no functionnal
> >>> chassis to slap them in, so he invented a bundle and didn't fully
> >>> test
> >> it.
> >>
> >> If the price is right, I'd take the extra RE and SCB as backup, in
> >> case not including them does not yield any significant saving.
> >>
> >> Mark.
> >> ___
> >> juniper-nsp mailing list juniper-nsp@puck.nether.net
> >> https://puck.nether.net/mailman/listinfo/juniper-nsp
> >>
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
> >
> 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX960 with 3 RE's?

2016-01-13 Thread joel jaeggli
On 1/13/16 8:59 AM, Colton Conor wrote:
> Just to confirm though, its the extra RE that is different and not
> supported in this config right? The MX960 can use 3 SCB's at once, but only
> 2 REs? Or do I have the wrong too?

An MX960 has a full fabric with two SCBs. It is n+1 redundant with 3,
e.g. at that point you can swap one without cutting the fabric bandwidth
in half.

NSR/GRES uses two REs. A third RE in the chassis is cold and dead. It's
probably an acceptable place to store a spare.

> 
> 
> 
> On Wed, Jan 13, 2016 at 8:16 AM, Mark Tinka  wrote:
> 
>>
>>
>> On 13/Jan/16 12:42, Jérôme Nicolle wrote:
>>
>>> Looks like the broker had a few spare SCB and REs and no functionnal
>>> chassis to slap them in, so he invented a bundle and didn't fully test
>> it.
>>
>> If the price is right, I'd take the extra RE and SCB as backup, in case
>> not including them does not yield any significant saving.
>>
>> Mark.
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
> 




___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX960 with 3 RE's?

2016-01-13 Thread Daniel Dobrijałowski
Hi!

On 13 January 2016, 18:08:36 CET, Niall Donaghy
>Finally about the REs, you can physically house a spare RE in the dual
>purpose SCB2 / MPC6 slot but I believe it lacks the additional
>backplane connections to enable the RE to work, hence it lies there
>dead.

My knowledge about MXes may be outdated, but AFAIR there is even more missing
than backplane links. There are internal switches on the REs. Each line card has
only two ports to connect with the two "switches", one for each RE. It would
require more links in the backplane and more internal ports on the line card, so
it would be incompatible with MPC3E. I have no knowledge of how the internal
switches on newer cards are built.

-- 
Regards,
Daniel Dobrijałowski
WCSS
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MX960 with 3 RE's?

2016-01-13 Thread Colton Conor
Thanks for the overwhelming wealth of information.

So it sounds like the ideal, supported setup would be an MX960 with 3 SCBs, 2x
RE-S-2000-4096, and redundant power supplies. For a recommended option, we
would have a third RE-S-2000-4096 as a spare (well, really we should have a
spare for everything), and install this spare in the third SCB slot just to
hold it (it would not be powered or do anything, just be there for a
swap if one of the 2 installed RE's failed instead of sitting on a shelf
somewhere).

Are there any feature limitations I should be aware of using an MX960 with
the above configuration and the older (I believe EOL'd) DPC-R-4XGE-XFP
linecards?  For comparison's sake, I am upgrading from an MX80 using the 4
10G ports on the front of the MX80.

The RE-S-2000-4096 is more powerful than the RE built into the MX80 (and
MX104), and of course the MX960 has more redundancy. Just wondering, on the
software and feature-set side, if I am losing anything assuming they are
running the same Junos version. Any gotchas to be concerned about using the
RE-S-2000-4096, regular SCBs, and DPC-R-4XGE-XFP line-cards? We are
pushing less than 80Gbps, so the backplane capacity is of little concern.


On Wed, Jan 13, 2016 at 11:15 AM, Daniel Roesen  wrote:

> On Wed, Jan 13, 2016 at 09:06:55AM -0800, joel jaeggli wrote:
> > An mx960 has a full fabric with two SCBs.
>
> That depends on the specific MPCs used in the chassis. Some do get
> full performance only with all 3 SCBs.
>
> > it is n+1 redundant with 3. e.g. at that point you can swap one
> > without cutting the fabric bandwidth in half.
>
> Technically, with MPCs (unlike DPCs), when removing the third SCB
> you're removing 1/3rd of the fabric bandwidth as all traffic is
> sprayed across all forwarding planes (two per SCB).
>
> With DPCs, only two SCBs (thus four fabric planes) could be active,
> with the third SCB being hot standby.
>
> Best regards,
> Daniel
>
> --
> CLUE-RIPE -- Jabber: d...@cluenet.de -- dr@IRCnet -- PGP: 0xA85C8AA0
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-12 Thread Colton Conor
Then how do they have 3 RE's listed and housed in the picture? Is the third RE
in the third SCB just sitting in there, but would not be powered on or usable?

On Tue, Jan 12, 2016 at 2:49 PM, Mark Tinka  wrote:

>
>
> On 12/Jan/16 22:40, Colton Conor wrote:
>
> > Is it possible to have 3 RE's in a MX960? For example:
> >
> http://www.ebay.com/itm/Juniper-MX960PREMIUM-DC-ECM-4x-PWR-MX960-DC-3x-SCB-MX960-3x-RE-S-2000-4096-/271739188162?hash=item3f44eaf3c2:g:Z~IAAOSwnDZT8lpv
> > shows 3s RE's installed?
> >
> > The documentation I have seen shows that a MX960 can have 3 SCB's, but it
> > mentions only 2 REs?
>
> The MX960 supports 3x SCB's for the switch fabric.
>
> However, only 2x SCB's can house RE's.
>
> >
> > How does the RE-S-2000-4096 compare to a RES-1800X4-16G?
>
> The latter is 64-bit, faster and supports more memory.
>
> >  How does a the
> > regular SCB compared to the Enhanced SCB?
>
> Faster switch fabric.
>
> Mark.
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-12 Thread Mark Tinka


On 13/Jan/16 06:21, Colton Conor wrote:

> Then how do they have 3 RE's listed and house in the picture? Is the 3
> RE in the 3 SBC just in there, but would not be powered on or usable?

Correct.

Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-12 Thread Masood Ahmad Shah
An RE can only be installed into the SCBs labeled 0 and 1; the third, additional
multi-function slot labeled 2/6 supports either an SCB (no RE) or an FPC
(aka MPC/DPC). Something like
https://www.safaribooksonline.com/library/view/juniper-mx-series/9781449358143/httpatomoreillycomsourceoreillyimages1327907.png.jpg

Cheers,
Masood

On Wed, Jan 13, 2016 at 3:21 PM, Colton Conor 
wrote:

> Then how do they have 3 RE's listed and house in the picture? Is the 3 RE
> in the 3 SBC just in there, but would not be powered on or usable?
>
> On Tue, Jan 12, 2016 at 2:49 PM, Mark Tinka  wrote:
>
> >
> >
> > On 12/Jan/16 22:40, Colton Conor wrote:
> >
> > > Is it possible to have 3 RE's in a MX960? For example:
> > >
> >
> http://www.ebay.com/itm/Juniper-MX960PREMIUM-DC-ECM-4x-PWR-MX960-DC-3x-SCB-MX960-3x-RE-S-2000-4096-/271739188162?hash=item3f44eaf3c2:g:Z~IAAOSwnDZT8lpv
> > > shows 3s RE's installed?
> > >
> > > The documentation I have seen shows that a MX960 can have 3 SCB's, but
> it
> > > mentions only 2 REs?
> >
> > The MX960 supports 3x SCB's for the switch fabric.
> >
> > However, only 2x SCB's can house RE's.
> >
> > >
> > > How does the RE-S-2000-4096 compare to a RES-1800X4-16G?
> >
> > The latter is 64-bit, faster and supports more memory.
> >
> > >  How does a the
> > > regular SCB compared to the Enhanced SCB?
> >
> > Faster switch fabric.
> >
> > Mark.
> >
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 with 3 RE's?

2016-01-12 Thread Mark Tinka


On 12/Jan/16 22:40, Colton Conor wrote:

> Is it possible to have 3 RE's in a MX960? For example:
> http://www.ebay.com/itm/Juniper-MX960PREMIUM-DC-ECM-4x-PWR-MX960-DC-3x-SCB-MX960-3x-RE-S-2000-4096-/271739188162?hash=item3f44eaf3c2:g:Z~IAAOSwnDZT8lpv
> shows 3 RE's installed?
>
> The documentation I have seen shows that a MX960 can have 3 SCB's, but it
> mentions only 2 REs?

The MX960 supports 3x SCB's for the switch fabric.

However, only 2x SCB's can house RE's.

>
> How does the RE-S-2000-4096 compare to a RES-1800X4-16G?

The latter is 64-bit, faster and supports more memory.

> How does the
> regular SCB compare to the Enhanced SCB?

Faster switch fabric.

Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp