Re: [j-nsp] MPC5EQ Feedback?

2017-11-02 Thread adamv0025
> From: Pavel Lunin [mailto:plu...@gmail.com]
> Sent: Wednesday, November 01, 2017 4:08 PM
> 
> 
> 
> There were two versions of MPC3:
> 
> 1. MPC3 non-NG, which has a single XM buffer manager and four LU chips
> (the old good ~65 Mpps LUs as in "classic" MPC1/2/16XGE old trio PFEs).
>
Yeah, stay away from those; they have fundamental design flaws on top of being 
hugely underpowered (especially with the queuing chip enabled). 

> 2. MPC3-NG which is based on exactly the same chipset as MPC5, based on
> XM+XL.
>
Yes, the building blocks are the same, but the architecture of the 5 is completely 
different: it's two XMs talking to a common XL. So is it two PFEs or just one 
PFE? If it's two, I'd suspect there to be a separate set of VOQs per XM so that 
each XM can backpressure the ingress LC independently. Also, if the common XL 
gets oversubscribed, does it backpressure both XMs or just the one 
responsible for most of its cycles? Two discrete arbiters feeding one common XL 
is just asking for trouble. 
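
To make that arbitration question concrete, here's a tiny toy model in Python 
(my own illustration with made-up numbers, not how the actual silicon works) of 
two XMs offering load to one shared XL. Depending on the arbitration policy, an 
oversubscribed XL either squeezes both XMs proportionally or only pushes back on 
the heavier one -which is exactly the ambiguity above:

# Toy model, not the real ASIC: two buffer managers (XMs) share one
# lookup engine (XL) with a fixed lookup budget per tick.
XL_CAPACITY = 100  # lookups the shared XL can serve per tick (made-up number)

def serve(demands, policy="proportional"):
    """Lookups granted to each XM in one tick, given their offered demands."""
    total = sum(demands)
    if total <= XL_CAPACITY:
        return list(demands)  # no oversubscription, nobody gets backpressured
    if policy == "proportional":
        # both XMs are squeezed in proportion to their offered load
        return [d * XL_CAPACITY // total for d in demands]
    if policy == "throttle-heaviest":
        # only the XM responsible for most of the XL cycles is pushed back
        granted = list(demands)
        heavy = granted.index(max(granted))
        granted[heavy] = XL_CAPACITY - (total - granted[heavy])
        return granted
    raise ValueError(policy)

# XM0 offers 90 lookups/tick, XM1 offers 30 -> the shared XL is short by 20
print(serve([90, 30], "proportional"))       # [75, 25]  both XMs penalised
print(serve([90, 30], "throttle-heaviest"))  # [70, 30]  only XM0 penalised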


> 
> MPC4 is much like MPC3 non-NG though it has two LUs instead of four with a
> new more "performant" microcode.
>
The 4 is the other way around from the 5: it has two gen-1 LUs hooked up to one 
gen-2 XM (mixing old and new, hence Trio 1.5) -again some experiment, or a need 
to keep up with the competition. 

> 
> XL chip (extended LU), which is present in MPC5/6 and 2-NG/3-NG has also
> multiple ALU cores (four, IIRC) but in contrast to MPC3 non-NG and MPC4
> these cores have a shared memory, so they don't suffer from some
> limitations (like not very precise policers) which you can face with multi-LU
> PFE architectures.
> 
> MPC7 has a completely new single core 400G chip (also present in the
> recently announced MX204 and MX10003).
> This said, I find MPC4 quite not bad in most scenarios. Never had any issues,
> specific to its architecture.
> 
> P. S. Finally this choice is all about money/performance.
> 
As I mentioned in the other discussion, most folks don't really need to dig 
into how routers actually work, because either the design does not allow the 
router to get clogged, or, in the cases where the ASICs do get overloaded, the 
customers don't care and are understanding (...oh, I see you had a DDoS 
situation, yeah, then it's understandable that my VPN traffic experienced drops, 
thank you, bye). In those cases there's really no point in spending more on the 
premium kit from Cisco. 

adam
  


Re: [j-nsp] MPC5EQ Feedback?

2017-11-01 Thread Scott Harvanek
David,

Thanks for pointing that out. I did read that, and I understand the available 
options/limitations between the 10G/40G interfaces. :)

Scott H



> On Nov 1, 2017, at 11:34 AM, Hunter, David B.  wrote:
> 
> Scott,
> 
> Just FYI, you may already be aware of this, but there was one limitation with 
> the MPC5E that we ran into.  We use the MPC5E-40G10G, which are working fine 
> in our data center for 40Gbs service.  I think the EQ version is the same in 
> regards to the limitation we discovered.  At the time we purchased the cards, 
> it wasn’t well documented as to how the 10 and 40Gbs ports could be used.  
> Juniper has since updated the documentation to specify the allowed port usage.
> 
> The card is not oversubscribed, the limit seems to be a hard 240Gbs for the 
> entire card imposed by the way in which the PICs can be used.
> 
> https://www.juniper.net/documentation/en_US/release-independent/junos/topics/reference/general/mpc5eq-6x40ge-24x10ge.html
>  
> 
> 
> “...Supports one of the following port combinations:
> • Six 40-Gigabit Ethernet ports
> • Twenty-four 10-Gigabit Ethernet ports
> • Three 40-Gigabit Ethernet ports and twelve 10-Gigabit Ethernet ports”
> 
> Also see:
> 
> https://www.juniper.net/documentation/en_US/junos/topics/reference/general/active-pics-mpc5e-guidelines.html
>  
> 
> 
> David B. Hunter
> IU Network Design Engineer
> Indiana University
> 317-278-4873
> davbh...@iu.edu
> 

Re: [j-nsp] MPC5EQ Feedback?

2017-11-01 Thread Hunter, David B.
Scott,

Just FYI, you may already be aware of this, but there was one limitation with 
the MPC5E that we ran into.  We use the MPC5E-40G10G cards, which are working 
fine in our data center for 40 Gbps service.  I think the EQ version is the same 
with regard to the limitation we discovered.  At the time we purchased the 
cards, it wasn't well documented how the 10 and 40 Gbps ports could be used.  
Juniper has since updated the documentation to specify the allowed port usage.

The card is not oversubscribed; the limit seems to be a hard 240 Gbps for the 
entire card, imposed by the way in which the PICs can be used.

https://www.juniper.net/documentation/en_US/release-independent/junos/topics/reference/general/mpc5eq-6x40ge-24x10ge.html
 


“...Supports one of the following port combinations:
• Six 40-Gigabit Ethernet ports
• Twenty-four 10-Gigabit Ethernet ports
• Three 40-Gigabit Ethernet ports and twelve 10-Gigabit Ethernet ports”
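
For planning purposes, here is a rough sketch (my own simplification, keyed 
only to the three combinations quoted above rather than the full active-PIC 
rules linked below) of checking whether an intended port mix is covered:

# Rough sanity check, based only on the documented combinations above.
ALLOWED = {(6, 0), (0, 24), (3, 12)}  # (active 40GE ports, active 10GE ports)

def fits(n_40ge, n_10ge):
    """True if the requested mix falls within one documented combination."""
    return any(n_40ge <= a40 and n_10ge <= a10 for a40, a10 in ALLOWED)

print(fits(3, 12))  # True  - the documented mixed mode
print(fits(6, 24))  # False - all ports at once would be 480G; the card tops out at 240G
print(fits(2, 20))  # False - more 10GE ports than the mixed mode allows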

Also see:

https://www.juniper.net/documentation/en_US/junos/topics/reference/general/active-pics-mpc5e-guidelines.html
 


David B. Hunter
IU Network Design Engineer
Indiana University
317-278-4873
davbh...@iu.edu


Re: [j-nsp] MPC5EQ Feedback?

2017-11-01 Thread Pavel Lunin
Worth noting that XL-based PFEs have much more SRAM and are capable of holding 
something like ~10M IPv4 LPM records in the FIB, in contrast to ~2.4M for the LU 
(I've never reached the limit, so these numbers are rather what I've read or 
been told here and there). And, of course, if you have both LU- and XL-based 
cards in the same box, your FIB is limited by the "lowest common 
denominator".

2017-11-01 17:07 GMT+01:00 Pavel Lunin :

>
>
> There were two versions of MPC3:
>
> 1. MPC3 non-NG, which has a single XM buffer manager and four LU chips
> (the old good ~65 Mpps LUs as in "classic" MPC1/2/16XGE old trio PFEs).
> 2. MPC3-NG which is based on exactly the same chipset as MPC5, based on
> XM+XL.
>
> MPC4 is much like MPC3 non-NG though it has two LUs instead of four with a
> new more "performant" microcode.
>
> XL chip (extended LU), which is present in MPC5/6 and 2-NG/3-NG has also
> multiple ALU cores (four, IIRC) but in contrast to MPC3 non-NG and MPC4
> these cores have a shared memory, so they don't suffer from some
> limitations (like not very precise policers) which you can face with
> multi-LU PFE architectures.
>
> MPC7 has a completely new single core 400G chip (also present in the
> recently announced MX204 and MX10003).
>
> This said, I find MPC4 quite not bad in most scenarios. Never had any
> issues, specific to its architecture.
>
> P. S. Finally this choice is all about money/performance.
>
>
> Kind regards,
> Pavel
>
>
> 2017-11-01 16:46 GMT+01:00 Scott Harvanek :
>
>> Adam,
>>
>> I thought that the MPC3E and MPC5E had the same generation Trio w/ XL and
>> XQ chips?  Just the MPC5E has two XM chips.
>>
>> Scott H
>>
>>
>>
>> > On Nov 1, 2017, at 10:28 AM, <adamv0...@netconsultings.com> wrote:
>> >
>> >> Scott Harvanek
>> >> Sent: Tuesday, October 31, 2017 6:57 PM
>> >>
>> >> Hey folks,
>> >>
>> >> We have some MX480s we need to add queuing capable 10G/40G ports to
>> >> and it looks like MPC5EQ-40G10G is going to be our most cost effective
>> >> solution.  Has anyone run into any limitations with these MPCs that
>> aren’t
>> >> clearly documented?
>> >>
>> >> We intend to use them for L3/VLAN traffic w/ CoS/Shaping.  Currently
>> we’re
>> >> doing that on MPC2E NG Qs w/ 10XGE-SFPP MICs , any reason we couldn’t
>> >> do the same on this along with the adding of the 40G ports? Any Layer3
>> >> limitations or the normal 2MM/6MM FIB/RIB?
>> >>
>> > Hey Scott,
>> > I'd rather go with a standard Trio architecture i.e. one lookup block
>> one buffering block (and one queuing block) -so mpc3 or mpc7.
>> > To me it seems like 4 and 5 are just experiments with the Trio
>> architecture that did not stood the test of time.
>> >
>> > adam
>> >
>> >
>>

Re: [j-nsp] MPC5EQ Feedback?

2017-11-01 Thread Scott Harvanek
Pavel,

Thank you for the detailed comments; this is basically what I understood to be 
the case.  I'm running MPC3-NG EQs right now, which, as you note, use the same 
chipset as the MPC5.  We haven't had any issues, so it sounds like the MPC5E 
should achieve what we need and operate as we expect.

Scott H



> On Nov 1, 2017, at 11:07 AM, Pavel Lunin  wrote:
> 
> 
> 
> There were two versions of MPC3:
> 
> 1. MPC3 non-NG, which has a single XM buffer manager and four LU chips (the 
> old good ~65 Mpps LUs as in "classic" MPC1/2/16XGE old trio PFEs).
> 2. MPC3-NG which is based on exactly the same chipset as MPC5, based on XM+XL.
> 
> MPC4 is much like MPC3 non-NG though it has two LUs instead of four with a 
> new more "performant" microcode.
> 
> XL chip (extended LU), which is present in MPC5/6 and 2-NG/3-NG has also 
> multiple ALU cores (four, IIRC) but in contrast to MPC3 non-NG and MPC4 these 
> cores have a shared memory, so they don't suffer from some limitations (like 
> not very precise policers) which you can face with multi-LU PFE architectures.
> 
> MPC7 has a completely new single core 400G chip (also present in the recently 
> announced MX204 and MX10003).
> 
> This said, I find MPC4 quite not bad in most scenarios. Never had any issues, 
> specific to its architecture.
> 
> P. S. Finally this choice is all about money/performance.
> 
> 
> Kind regards,
> Pavel
> 
> 
> 2017-11-01 16:46 GMT+01:00 Scott Harvanek:
> Adam,
> 
> I thought that the MPC3E and MPC5E had the same generation Trio w/ XL and XQ 
> chips?  Just the MPC5E has two XM chips.
> 
> Scott H
> 
> 
> 
> > On Nov 1, 2017, at 10:28 AM, <adamv0...@netconsultings.com> wrote:
> >
> >> Scott Harvanek
> >> Sent: Tuesday, October 31, 2017 6:57 PM
> >>
> >> Hey folks,
> >>
> >> We have some MX480s we need to add queuing capable 10G/40G ports to
> >> and it looks like MPC5EQ-40G10G is going to be our most cost effective
> >> solution.  Has anyone run into any limitations with these MPCs that aren’t
> >> clearly documented?
> >>
> >> We intend to use them for L3/VLAN traffic w/ CoS/Shaping.  Currently we’re
> >> doing that on MPC2E NG Qs w/ 10XGE-SFPP MICs , any reason we couldn’t
> >> do the same on this along with the adding of the 40G ports? Any Layer3
> >> limitations or the normal 2MM/6MM FIB/RIB?
> >>
> > Hey Scott,
> > I'd rather go with a standard Trio architecture i.e. one lookup block one 
> > buffering block (and one queuing block) -so mpc3 or mpc7.
> > To me it seems like 4 and 5 are just experiments with the Trio architecture 
> > that did not stood the test of time.
> >
> > adam
> >
> >
> 

Re: [j-nsp] MPC5EQ Feedback?

2017-11-01 Thread Pavel Lunin
There were two versions of MPC3:

1. MPC3 non-NG, which has a single XM buffer manager and four LU chips (the
good old ~65 Mpps LUs, as in the "classic" MPC1/2/16XGE old Trio PFEs).
2. MPC3-NG, which is based on exactly the same chipset as MPC5: XM+XL.

MPC4 is much like MPC3 non-NG, though it has two LUs instead of four, with
new, more "performant" microcode.

The XL chip (extended LU), which is present in MPC5/6 and 2-NG/3-NG, also has
multiple ALU cores (four, IIRC), but in contrast to MPC3 non-NG and MPC4,
these cores have a shared memory, so they don't suffer from some
limitations (like not very precise policers) which you can face with
multi-LU PFE architectures.
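
To illustrate the policer point with a toy example of my own (not the real 
microcode): if independent LU cores each police an equal slice of the aggregate 
rate with no shared state, an uneven spread of flows across the cores makes the 
aggregate result inexact.

# Toy example: a 400 Mbps policer naively split across four cores, no shared counters.
AGGREGATE_RATE = 400                      # Mbps the policer should admit in total
CORES = 4
PER_CORE_RATE = AGGREGATE_RATE / CORES    # 100 Mbps per core

def admitted(per_core_offered_mbps):
    """Total Mbps admitted when each core polices only its own slice."""
    return sum(min(offered, PER_CORE_RATE) for offered in per_core_offered_mbps)

print(admitted([100, 100, 100, 100]))  # 400.0 - even spread across cores, exact
print(admitted([400, 0, 0, 0]))        # 100.0 - skewed spread, aggregate under-admitted

With a shared memory, the cores can work against one common bucket and admit the
full 400 in both cases.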

MPC7 has a completely new single core 400G chip (also present in the
recently announced MX204 and MX10003).

That said, I find the MPC4 not bad at all in most scenarios. I've never had any
issues specific to its architecture.

P.S. In the end, this choice is all about money/performance.


Kind regards,
Pavel


2017-11-01 16:46 GMT+01:00 Scott Harvanek :

> Adam,
>
> I thought that the MPC3E and MPC5E had the same generation Trio w/ XL and
> XQ chips?  Just the MPC5E has two XM chips.
>
> Scott H
>
>
>
> > On Nov 1, 2017, at 10:28 AM, <adamv0...@netconsultings.com> wrote:
> >
> >> Scott Harvanek
> >> Sent: Tuesday, October 31, 2017 6:57 PM
> >>
> >> Hey folks,
> >>
> >> We have some MX480s we need to add queuing capable 10G/40G ports to
> >> and it looks like MPC5EQ-40G10G is going to be our most cost effective
> >> solution.  Has anyone run into any limitations with these MPCs that
> aren’t
> >> clearly documented?
> >>
> >> We intend to use them for L3/VLAN traffic w/ CoS/Shaping.  Currently
> we’re
> >> doing that on MPC2E NG Qs w/ 10XGE-SFPP MICs , any reason we couldn’t
> >> do the same on this along with the adding of the 40G ports? Any Layer3
> >> limitations or the normal 2MM/6MM FIB/RIB?
> >>
> > Hey Scott,
> > I'd rather go with a standard Trio architecture i.e. one lookup block
> one buffering block (and one queuing block) -so mpc3 or mpc7.
> > To me it seems like 4 and 5 are just experiments with the Trio
> architecture that did not stood the test of time.
> >
> > adam
> >
> >
>

Re: [j-nsp] MPC5EQ Feedback?

2017-11-01 Thread Scott Harvanek
Adam,

I thought that the MPC3E and MPC5E had the same generation Trio w/ XL and XQ 
chips?  Just the MPC5E has two XM chips.  

Scott H



> On Nov 1, 2017, at 10:28 AM, <adamv0...@netconsultings.com> wrote:
> 
>> Scott Harvanek
>> Sent: Tuesday, October 31, 2017 6:57 PM
>> 
>> Hey folks,
>> 
>> We have some MX480s we need to add queuing capable 10G/40G ports to
>> and it looks like MPC5EQ-40G10G is going to be our most cost effective
>> solution.  Has anyone run into any limitations with these MPCs that aren’t
>> clearly documented?
>> 
>> We intend to use them for L3/VLAN traffic w/ CoS/Shaping.  Currently we’re
>> doing that on MPC2E NG Qs w/ 10XGE-SFPP MICs , any reason we couldn’t
>> do the same on this along with the adding of the 40G ports? Any Layer3
>> limitations or the normal 2MM/6MM FIB/RIB?
>> 
> Hey Scott,
> I'd rather go with a standard Trio architecture i.e. one lookup block one 
> buffering block (and one queuing block) -so mpc3 or mpc7. 
> To me it seems like 4 and 5 are just experiments with the Trio architecture 
> that did not stood the test of time. 
> 
> adam
> 
> 


Re: [j-nsp] MPC5EQ Feedback?

2017-11-01 Thread adamv0025
> Scott Harvanek
> Sent: Tuesday, October 31, 2017 6:57 PM
> 
> Hey folks,
> 
> We have some MX480s we need to add queuing capable 10G/40G ports to
> and it looks like MPC5EQ-40G10G is going to be our most cost effective
> solution.  Has anyone run into any limitations with these MPCs that aren’t
> clearly documented?
> 
> We intend to use them for L3/VLAN traffic w/ CoS/Shaping.  Currently we’re
> doing that on MPC2E NG Qs w/ 10XGE-SFPP MICs , any reason we couldn’t
> do the same on this along with the adding of the 40G ports? Any Layer3
> limitations or the normal 2MM/6MM FIB/RIB?
> 
Hey Scott,
I'd rather go with a standard Trio architecture, i.e. one lookup block, one 
buffering block (and one queuing block) -so MPC3 or MPC7. 
To me it seems like the 4 and the 5 are just experiments with the Trio 
architecture that did not stand the test of time. 

adam

