Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Joe Horton via juniper-nsp
--- Begin Message ---
Yes those four points are all very valid.

Just wanted to clear up that using 100G didn’t “lose” the extra 40G of capacity.

Joe

From: Brian Johnson 
Date: Wednesday, May 6, 2020 at 1:58 PM
To: Joe Horton 
Cc: Tobias Heister , "juniper-nsp@puck.nether.net" 

Subject: Re: [j-nsp] Rate selectability on MPC7E-MRATE


Still… All of my points are valid.

- Brian



On May 6, 2020, at 1:48 PM, Joe Horton <jhor...@juniper.net> wrote:

Brian,

I'm not sure who your team is, but the other guys are correct.  You can 
provision the additional ports using port mode.
Just enabling the 100G ports does not disable the use of the additional 40G.  
Been there done that, fully supported.
If you've got something specific from your team or JTAC otherwise, and don't 
feel right sharing it on the forum, feel free to reach out directly.

Joe


On 5/6/20, 1:45 PM, "juniper-nsp on behalf of Brian Johnson" 
<juniper-nsp-boun...@puck.nether.net on behalf of brian.john...@netgeek.us> wrote:



   Several points.

   1. Configuration examples explaining how something is configured are not 
supposed to imply that this is how you should configure it or even that the 
exact configuration is valid. The example configuration could allow for 
over-subscription if port 5 were added at 100G.

   2. The MPC7 card can be RTU licensed to 50% and 75% of the ports. Not 
following licensing restrictions on port usage will void support.

   3. Junos will let you configure all kinds of things that will either not 
work or break later. It’s a feature. ;)

   4. Be sure you fully understand what you are doing before implementing it. 
Checking with Juniper that it is a supported configuration is not a bad idea 
when understanding of the features is cloudy. I work with customers all of the 
time on the Juniper MX product line and this card is still very misunderstood 
(even by me occasionally).

   My advice would be to validate what you are doing with JTAC before 
implementing in production.

   - Brian


On May 6, 2020, at 12:47 PM, Tobias Heister <li...@tobias-heister.de> wrote:

On 06.05.2020 18:24, Brian Johnson wrote:

A wise man once told me… “Just because you can do something, doesn’t mean you 
should”. More specifically, “Just because you can do it in the Junos config, 
doesn’t mean it’s supported.” Juniper’s licensing “honor system” required 
honorable intentions. ;)

I would say it is supported. Even the documentation has an example where one 
port of the group is 100GE and two others are 10GE:
https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html#id-configuring-rate-selectability-at-port-level

Also, with the MPC7 it's not like it's honor-based to only use 240G per PFE ... it's 
a hard limit ;)

If you run in PIC mode with 100GE set, then indeed the other ports are 
disabled:
"For example, if you choose to configure PIC 0 at 100-Gbps speed, only ports 2 
and 5 of PIC 0 operate at 100-Gbps speed, while the other ports of the PIC are 
disabled."
https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html#id-configuring-rate-selectability-on-mpc7e-multi-rate-to-enable-different-port-speeds

I mean what else should it do, there are only two 100GE Ports per PFE anyway ;)

--
Kind Regards
Tobias Heister
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

   ___
   juniper-nsp mailing list juniper-nsp@puck.nether.net
   https://puck.nether.net/mailman/listinfo/juniper-nsp


--- End Message ---
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Vincentz Petzholtz
Hi Tobias,

Clearly our last beer together happened too long ago ;-)
I haven’t referred to the KB article or anything … I just wanted to mention 
that you can use 100G and 40G ports on the same PIC as long as you don’t go 
above 240G per PIC. Of course setting $whatever to 100G only will not work.

But this
set chassis fpc 1 pic 0 port 0 speed 40g
set chassis fpc 1 pic 0 port 2 speed 100g
set chassis fpc 1 pic 0 port 5 speed 100g
set chassis fpc 1 pic 1 port 0 speed 40g
set chassis fpc 1 pic 1 port 2 speed 100g
set chassis fpc 1 pic 1 port 5 speed 100g
works fine.
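
For completeness, a quick way to check the result after the PIC comes back up 
(just a sketch; the fpc/pic numbers match the example above, adjust for your 
chassis):

show chassis pic fpc-slot 1 pic-slot 0
show interfaces terse | match "et-1/0/|et-1/1/"

With the port-level config above you should see et-1/0/2, et-1/0/5, et-1/1/2 
and et-1/1/5 at 100G, plus et-1/0/0 and et-1/1/0 at 40G.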

Just my 2 cents.

Best regards,
Vincentz

> On 06.05.2020 at 20:40, Tobias Heister wrote:
> 
> Hi,
> 
> On 06.05.2020 20:15, Vincentz Petzholtz wrote:
>> That’s not true and you said it yourself. You have 240G per PIC.
>> With 2x100G ports enabled you can still set one remaining port to 40G on the 
>> same pic.
>> And it also works just fine.
> 
> Which part of my last mail is not true? Maybe I did not make myself clear 
> enough?
> 
> If you configure the PIC in per-port mode/level, you can configure different modes per 
> port and hence have 2x100GE and 1x40/4x10GE on the same PIC. See the first 
> part of my last mail.
> https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html#id-configuring-rate-selectability-at-port-level
> 
> If you configure the PIC in PIC speed mode/level 100GE you will only have 
> 2x100GE and nothing else per pic, as described in the second part of my last 
> mail
> https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html#id-configuring-rate-selectability-at-pic-level
> 
> The latter could make sense if you use e.g. SCBE2 and want 2+1/1+1 mode and 
> not 3+0/2+0 to reduce the level of card to fabric/slot "oversub" as you only 
> get up to 480G per slot on that SCBE2 if all boards are active at the same 
> time.
> 
> --
> Kind Regards
> Tobias Heister



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Brian Johnson
Still… All of my points are valid.

- Brian


> On May 6, 2020, at 1:48 PM, Joe Horton  wrote:
> 
> Brian,
> 
> I'm not sure who your team is, but the other guys are correct.  You can 
> provision the additional ports using port mode.
> Just enabling the 100G ports does not disable the use of the additional 40G.  
> Been there done that, fully supported.
> If you've got something specific from your team or JTAC otherwise, and don't 
> feel right sharing it on the forum, feel free to reach out directly.
> 
> Joe
> 
> 
> On 5/6/20, 1:45 PM, "juniper-nsp on behalf of Brian Johnson" 
> <brian.john...@netgeek.us> wrote:
> 
> 
> 
>Several points.
> 
>1. Configuration examples explaining how something is configured are not 
> supposed to imply that this is how you should configure it or even that the 
> exact configuration is valid. The example configuration could allow for 
> over-subscription if port 5 were added at 100G.
> 
>2. The MPC7 card can be RTU licensed to 50% and 75% of the ports. Not 
> following licensing restrictions on port usage will void support.
> 
>3. Junos will let you configure all kinds of things that will either not 
> work or break later. It’s a feature. ;)
> 
>4. Be sure you fully understand what you are doing before implementing it. 
> Checking with Juniper that it is a supported configuration is not a bad idea 
> when understanding of the features is cloudy. I work with customers all of the 
> time on the Juniper MX product line and this card is still very misunderstood 
> (even by me occasionally).
> 
>My advice would be to validate what you are doing with JTAC before 
> implementing in production.
> 
>- Brian
> 
>> On May 6, 2020, at 12:47 PM, Tobias Heister  wrote:
>> 
>> On 06.05.2020 18:24, Brian Johnson wrote:
>>> A wise man once told me… “Just because you can do something, doesn’t mean 
>>> you should”. More specifically, “Just because you can do it in the Junos 
>>> config, doesn’t mean it’s supported.” Juniper’s licensing “honor system” 
>>> required honorable intentions. ;)
>> 
>> I would say it is supported. Even the documentation has an example where one 
>> port of the group is 100GE and two others are 10GE:
>> https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html#id-configuring-rate-selectability-at-port-level
>> 
>> Also, with the MPC7 it's not like it's honor-based to only use 240G per PFE ... it's 
>> a hard limit ;)
>> 
>> If you run in PIC mode with 100GE set, then indeed the other ports are 
>> disabled:
>> "For example, if you choose to configure PIC 0 at 100-Gbps speed, only ports 
>> 2 and 5 of PIC 0 operate at 100-Gbps speed, while the other ports of the PIC 
>> are disabled."
>> https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html#id-configuring-rate-selectability-on-mpc7e-multi-rate-to-enable-different-port-speeds
>> 
>> I mean what else should it do, there are only two 100GE Ports per PFE anyway 
>> ;)
>> 
>> --
>> Kind Regards
>> Tobias Heister
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>> 
> 
>___
>juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
> 
> 
> 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Joe Horton via juniper-nsp
--- Begin Message ---
Brian,

I'm not sure who your team is, but the other guys are correct.  You can 
provision the additional ports using port mode.
Just enabling the 100G ports does not disable the use of the additional 40G.  
Been there done that, fully supported.
If you've got something specific from your team or JTAC otherwise, and don't 
feel right sharing it on the forum, feel free to reach out directly.

Joe


On 5/6/20, 1:45 PM, "juniper-nsp on behalf of Brian Johnson" wrote:



Several points.

1. Configuration examples explaining how something is configured are not 
supposed to imply that this is how you should configure it or even that the 
exact configuration is valid. The example configuration could allow for 
over-subscription if port 5 were added at 100G.

2. The MPC7 card can be RTU licensed to 50% and 75% of the ports. Not 
following licensing restrictions on port usage will void support.

3. Junos will let you configure all kinds of things that will either not 
work or break later. It’s a feature. ;)

4. Be sure you fully understand what you are doing before implementing it. 
Checking with Juniper that it is a supported configuration is not a bad idea 
when understanding of the features is cloudy. I work with customers all of the 
time on the Juniper MX product line and this card is still very misunderstood 
(even by me occasionally).

My advice would be to validate what you are doing with JTAC before 
implementing in production.

- Brian

> On May 6, 2020, at 12:47 PM, Tobias Heister  
wrote:
>
> On 06.05.2020 18:24, Brian Johnson wrote:
>> A wise man once told me… “Just because you can do something, doesn’t 
mean you should”. More specifically, “Just because you can do it in the Junos 
config, doesn’t mean it’s supported.” Juniper’s licensing “honor system” 
required honorable intentions. ;)
>
> I would say it is supported. Even the documentation has an example where 
one port of the group is 100GE and two others are 10GE:
> 
https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html#id-configuring-rate-selectability-at-port-level
>
> Also, with the MPC7 it's not like it's honor-based to only use 240G per PFE ... 
it's a hard limit ;)
>
> If you run in PIC mode with 100GE set, then indeed the other ports are 
disabled:
> "For example, if you choose to configure PIC 0 at 100-Gbps speed, only 
ports 2 and 5 of PIC 0 operate at 100-Gbps speed, while the other ports of the 
PIC are disabled."
> 
https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html#id-configuring-rate-selectability-on-mpc7e-multi-rate-to-enable-different-port-speeds
>
> I mean what else should it do, there are only two 100GE Ports per PFE 
anyway ;)
>
> --
> Kind Regards
> Tobias Heister
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> 
https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net

https://puck.nether.net/mailman/listinfo/juniper-nsp


--- End Message ---
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Joe Horton via juniper-nsp
--- Begin Message ---
First, I would like to clarify that yes you can provision up to 240G of 
capacity per PFE/PIC on the MPC7.
It can be any combination of 100/40/10, just as long as the total doesn't go 
over 240, noting that only certain ports are 100G capable.
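
To make the arithmetic concrete, here is one way a single PIC lands exactly at 
240G in port mode (a sketch only, not the only valid combination; ports 2 and 5 
are the 100G-capable ones, and the fpc/pic numbers are just placeholders):

fpc 0 {
    pic 0 {
        /* 40 + 100 + 100 = 240G, so nothing is oversubscribed */
        port 0 {
            speed 40g;
        }
        port 2 {
            speed 100g;
        }
        port 5 {
            speed 100g;
        }
    }
}

Leave the remaining ports of that PIC unconfigured; adding anything else would 
push the total past 240G.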

Second, to Tobias' comments, yes that is a good suggestion and I'll pass it on 
internally.

Also, while I haven't explicitly tested this, and I'll try if I can get lab 
access and have some cycles, in my prior experience changing the chassis 
portion of the configuration doesn't change anything until you restart the PIC.
Of course, if someone also changes the "interfaces" stanza, that new config 
would be attempted but fail.  But changing the chassis portion alone won't 
"kill" an active port, at least not without a reset.  So some heavy annotations 
in that portion of the configuration are probably a good idea on a large team.
Additionally, inserting an optic doesn't do anything either; in fact the optics 
basically won't work until you get the chassis portion of the config to match 
(I ran into that myself in early testing, prior to learning you have to reset 
the PIC).
So operationally you are pretty safe from someone unexpectedly installing an 
optic.

Joe

On 5/6/20, 12:42 PM, "juniper-nsp on behalf of Tobias Heister" wrote:



Hi,

On 06.05.2020 18:03, Chris Wopat wrote:
> On Wed, May 6, 2020 at 9:41 AM Brian Johnson  
wrote:
>>
>> So you have a 4x10G breakout and a 100G QSFP28 in the same group of 3 
interfaces and they are all working? Just because I can install and configure 
the optics, doesn’t mean they will function. This would conflict with what is 
coming from Juniper Product teams.
>>
>> To be clear, I realize that the ports do not “disappear” because you 
insert the QSFP28 into the port group, just that they will not work. :)
>
> We've done this with MPC7s, works fine. You can squeeze the 240g out
> of each PIC just fine, you simply cannot oversub.
>
>  fpc 7 {
>      pic 0 {
>          port 2 {
>              speed 100g;
>          }
>          port 4 {
>              speed 10g;
>          }
>          port 5 {
>              speed 100g;
>          }
>      }
>  }
>
> ports 4 and 5 in same 'group of 3', et-7/0/5 up at 100g and
> xe-7/0/4:[0-3] up at 10g.

I always wondered whether there is an explicit knob to disable a port in 
order to prevent accidental wrong configs or transceiver inserts down the 
road. Of course you can annotate the existing ports or the PIC, but besides 
that. Also, what happens if somebody plugs in a transceiver into any of the 
remaining ports? Will the setup just fall apart?

You have 6 ports per PFE, and if you do 100GE on two of them you will end up 
with something similar to the above config (you can choose whether to do 40 or 
10GE on one of the ports).  Which leaves three interfaces unconfigured or not 
listed in the config. In fact, whenever one port is configured to 100G you will 
"lose" at least one of the ports and have to leave it not listed in the config 
for things to work.

If at some point in the future somebody configures any of the remaining 
ports for an invalid speed, it will not work. Even worse, the default mode for 
MPC7E-MRATE is to fall back to 10GE mode on all ports on an invalid config, 
which could kill your 100GE production ports. Luckily you have to bounce the 
PFE for speed changes, which could be even worse if your wrong config hits you 
during your next reboot if you do not mind the alarms ;)

"If rate selectability is not configured or if invalid port speeds are 
configured, each port operates as four 10-Gigabit Ethernet interface"
"When you change an existing port speed configuration at the port level, 
you must reset the MPC7E-MRATE PIC for the configuration to take effect. An 
alarm is generated indicating the change in port speed configuration."


https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html

So it would be great to have a config option to explicitly disable specific 
ports and not just leave them unconfigured. Of course you can also misconfigure 
any of the disabled ports into an unsupported speed combo, but it would be a 
bit more visible that they are disabled by intention.

You probably could configure all ports and literally "deactivate" the 
configs that you do not want to be enabled and annotate that, but it feels a 
bit clunky.

Especially on boxes like MX204 and MX10003 we would always explicitly 
configure the ports into a valid config combination to prevent somebody from 
putting in transceivers and the box trying to be smart and mess up your ports. 
I think you cannot easily do that on the MPC7.

--
Kind Regards
Tobias Heister

Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Brian Johnson
Several points.

1. Configuration examples explaining how something is configured are not 
supposed to imply that this is how you should configure it or even that the 
exact configuration is valid. The example configuration could allow for 
over-subscription if port 5 were added at 100G.

2. The MPC7 card can be RTU licensed to 50% and 75% of the ports. Not following 
licensing restrictions on port usage will void support.

3. Junos will let you configure all kinds of things that will either not work 
or break later. It’s a feature. ;)

4. Be sure you fully understand what you are doing before implementing it. 
Checking with Juniper that it is a supported configuration is not a bad idea 
when understanding of the features is cloudy. I work with customers all of the 
time on the Juniper MX product line and this card is still very misunderstood 
(even by me occasionally).

My advice would be to validate what you are doing with JTAC before implementing 
in production.

- Brian

> On May 6, 2020, at 12:47 PM, Tobias Heister  wrote:
> 
> On 06.05.2020 18:24, Brian Johnson wrote:
>> A wise man once told me… “Just because you can do something, doesn’t mean 
>> you should”. More specifically, “Just because you can do it in the Junos config, 
>> doesn’t mean it’s supported.” Juniper’s licensing “honor system” required 
>> honorable intentions. ;)
> 
> I would say it is supported. Even the documentation has an example where one 
> port of the group is 100GE and two others are 10GE:
> https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html#id-configuring-rate-selectability-at-port-level
> 
> Also, with the MPC7 it's not like it's honor-based to only use 240G per PFE ... it's 
> a hard limit ;)
> 
> If you run in PIC mode with 100GE set, then indeed the other ports are 
> disabled:
> "For example, if you choose to configure PIC 0 at 100-Gbps speed, only ports 
> 2 and 5 of PIC 0 operate at 100-Gbps speed, while the other ports of the PIC 
> are disabled."
> https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html#id-configuring-rate-selectability-on-mpc7e-multi-rate-to-enable-different-port-speeds
> 
> I mean what else should it do, there are only two 100GE Ports per PFE anyway 
> ;)
> 
> -- 
> Kind Regards
> Tobias Heister
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Tobias Heister

Hi,

On 06.05.2020 20:15, Vincentz Petzholtz wrote:

That’s not true and you said it yourself. You have 240G per PIC.
With 2x100G ports enabled you can still set one remaining port to 40G on the 
same pic.
And it also works just fine.


Which part of my last mail is not true? Maybe I did not make myself clear 
enough?

If you configure the PIC in per-port mode/level, you can configure different modes per 
port and hence have 2x100GE and 1x40/4x10GE on the same PIC. See the first part 
of my last mail.
https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html#id-configuring-rate-selectability-at-port-level

If you configure the PIC in PIC speed mode/level 100GE you will only have 
2x100GE and nothing else per pic, as described in the second part of my last 
mail
https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html#id-configuring-rate-selectability-at-pic-level

The latter could make sense if you use e.g. SCBE2 and want 2+1/1+1 mode and not 3+0/2+0 to 
reduce the level of card to fabric/slot "oversub" as you only get up to 480G per 
slot on that SCBE2 if all boards are active at the same time.
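
For reference, the PIC-level variant looks roughly like this (a sketch based 
on the pic-mode knob used earlier in this thread; exact syntax can differ per 
release, so check the rate-selectability docs linked above):

chassis {
    fpc 1 {
        pic 0 {
            /* whole PIC forced to 100GE: only ports 2 and 5 stay active */
            pic-mode 100G;
        }
    }
}

In per-port mode each port instead carries its own speed statement and you can 
mix 100/40/10 up to the 240G limit.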

--
Kind Regards
Tobias Heister
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Vincentz Petzholtz
Hi there,

That’s not true and you said it yourself. You have 240G per PIC.
With 2x100G ports enabled you can still set one remaining port to 40G on the 
same pic.
And it also works just fine.

Best regards,
Vincentz

> On 06.05.2020 at 19:47, Tobias Heister wrote:
> 
> On 06.05.2020 18:24, Brian Johnson wrote:
>> A wise man once told me… “Just because you can do something, doesn’t mean 
>> you should”. More specifically, “Just because you can do it in the Junos config, 
>> doesn’t mean it’s supported.” Juniper’s licensing “honor system” required 
>> honorable intentions. ;)
> 
> I would say it is supported. Even the documentation has an example where one 
> port of the group is 100GE and two others are 10GE:
> https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html#id-configuring-rate-selectability-at-port-level
> 
> Also, with the MPC7 it's not like it's honor-based to only use 240G per PFE ... it's 
> a hard limit ;)
> 
> If you run in PIC mode with 100GE set, then indeed the other ports are 
> disabled:
> "For example, if you choose to configure PIC 0 at 100-Gbps speed, only ports 
> 2 and 5 of PIC 0 operate at 100-Gbps speed, while the other ports of the PIC 
> are disabled."
> https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html#id-configuring-rate-selectability-on-mpc7e-multi-rate-to-enable-different-port-speeds
> 
> I mean what else should it do, there are only two 100GE Ports per PFE anyway 
> ;)
> 
> --
> Kind Regards
> Tobias Heister
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Tobias Heister

On 06.05.2020 18:24, Brian Johnson wrote:

A wise man once told me… “Just because you can do something, doesn’t mean you 
should”. More specifically, “Just because you can do it in the Junos config, 
doesn’t mean it’s supported.” Juniper’s licensing “honor system” required 
honorable intentions. ;)


I would say it is supported. Even the documentation has an example where one 
port of the group is 100GE and two others are 10GE:
https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html#id-configuring-rate-selectability-at-port-level

Also, with the MPC7 it's not like it's honor-based to only use 240G per PFE ... it's 
a hard limit ;)

If you run in PIC mode with 100GE set, then indeed the other ports are 
disabled:
"For example, if you choose to configure PIC 0 at 100-Gbps speed, only ports 2 and 5 
of PIC 0 operate at 100-Gbps speed, while the other ports of the PIC are disabled."
https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html#id-configuring-rate-selectability-on-mpc7e-multi-rate-to-enable-different-port-speeds

I mean what else should it do, there are only two 100GE Ports per PFE anyway ;)

--
Kind Regards
Tobias Heister
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Tobias Heister

Hi,

On 06.05.2020 18:03, Chris Wopat wrote:

On Wed, May 6, 2020 at 9:41 AM Brian Johnson  wrote:


So you have a 4x10G breakout and a 100G QSFP28 in the same group of 3 
interfaces and they are all working? Just because I can install and configure 
the optics, doesn’t mean they will function. This would conflict with what is 
coming from Juniper Product teams.

To be clear, I realize that the ports do not “disappear” because you insert the 
QSFP28 into the port group, just that they will not work. :)


We've done this with MPC7s, works fine. You can squeeze the 240g out
of each PIC just fine, you simply cannot oversub.

 fpc 7 {
     pic 0 {
         port 2 {
             speed 100g;
         }
         port 4 {
             speed 10g;
         }
         port 5 {
             speed 100g;
         }
     }
 }

ports 4 and 5 in same 'group of 3', et-7/0/5 up at 100g and
xe-7/0/4:[0-3] up at 10g.


I always wondered whether there is an explicit knob to disable a port in order 
to prevent accidental wrong configs or transceiver inserts down the road. Of 
course you can annotate the existing ports or the PIC, but besides that. Also, 
what happens if somebody plugs in a transceiver into any of the remaining 
ports? Will the setup just fall apart?

You have 6 ports per PFE, and if you do 100GE on two of them you will end up with 
something similar to the above config (you can choose whether to do 40 or 10GE on one of 
the ports).  Which leaves three interfaces unconfigured or not listed in the config. In 
fact, whenever one port is configured to 100G you will "lose" at least one of 
the ports and have to leave it not listed in the config for things to work.

If at some point in the future somebody configures any of the remaining ports 
for an invalid speed, it will not work. Even worse, the default mode for 
MPC7E-MRATE is to fall back to 10GE mode on all ports on an invalid config, 
which could kill your 100GE production ports. Luckily you have to bounce the 
PFE for speed changes, which could be even worse if your wrong config hits you 
during your next reboot if you do not mind the alarms ;)

"If rate selectability is not configured or if invalid port speeds are configured, 
each port operates as four 10-Gigabit Ethernet interface"
"When you change an existing port speed configuration at the port level, you must 
reset the MPC7E-MRATE PIC for the configuration to take effect. An alarm is generated 
indicating the change in port speed configuration."

https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html

So it would be great to have a config option to explicitly disable specific 
ports and not just leave them unconfigured. Of course you can also misconfigure 
any of the disabled ports into an unsupported speed combo, but it would be a bit 
more visible that they are disabled by intention.

You probably could configure all ports and literally "deactivate" the configs 
that you do not want to be enabled and annotate that, but it feels a bit clunky.
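
A rough sketch of that deactivate/annotate approach (untested here, and whether 
an inactive speed statement still interacts with the port-profile validation is 
something to verify on your release):

set chassis fpc 7 pic 0 port 3 speed 10g
deactivate chassis fpc 7 pic 0 port 3
edit chassis fpc 7 pic 0
annotate port 3 "parked on purpose - enabling this would oversubscribe the PFE"

At least the intent would then show up directly in the chassis stanza instead 
of only as an absent port.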

Especially on boxes like MX204 and MX10003 we would always explicitly configure 
the ports into a valid config combination to prevent somebody from putting in 
transceivers and the box trying to be smart and mess up your ports. I think you 
cannot easily do that on the MPC7.

--
Kind Regards
Tobias Heister
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Brian Johnson
A wise man once told me… “Just because you can do something, doesn’t mean you 
should”. More specifically, “Just because you can do it in the Junos config, 
doesn’t mean it’s supported.” Juniper’s licensing “honor system” required 
honorable intentions. ;)

Have fun!

- Brian

> On May 6, 2020, at 11:03 AM, Chris Wopat  wrote:
> 
> On Wed, May 6, 2020 at 9:41 AM Brian Johnson  wrote:
>> 
>> So you have a 4x10G breakout and a 100G QSFP28 in the same group of 3 
>> interfaces and they are all working? Just because I can install and 
>> configure the optics, doesn’t mean they will function. This would conflict 
>> with what is coming from Juniper Product teams.
>> 
>> To be clear, I realize that the ports do not “disappear” because you insert 
>> the QSFP28 into the port group, just that they will not work. :)
> 
> We've done this with MPC7s, works fine. You can squeeze the 240g out
> of each PIC just fine, you simply cannot oversub.
> 
>    fpc 7 {
>        pic 0 {
>            port 2 {
>                speed 100g;
>            }
>            port 4 {
>                speed 10g;
>            }
>            port 5 {
>                speed 100g;
>            }
>        }
>    }
> 
> ports 4 and 5 in same 'group of 3', et-7/0/5 up at 100g and
> xe-7/0/4:[0-3] up at 10g.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Chris Wopat
On Wed, May 6, 2020 at 9:41 AM Brian Johnson  wrote:
>
> So you have a 4x10G breakout and a 100G QSFP28 in the same group of 3 
> interfaces and they are all working? Just because I can install and configure 
> the optics, doesn’t mean they will function. This would conflict with what is 
> coming from Juniper Product teams.
>
> To be clear, I realize that the ports do not “disappear” because you insert 
> the QSP28 into the port group, just that they will not work. :)

We've done this with MPC7s, works fine. You can squeeze the 240g out
of each PIC just fine, you simply cannot oversub.

fpc 7 {
    pic 0 {
        port 2 {
            speed 100g;
        }
        port 4 {
            speed 10g;
        }
        port 5 {
            speed 100g;
        }
    }
}

ports 4 and 5 in same 'group of 3', et-7/0/5 up at 100g and
xe-7/0/4:[0-3] up at 10g.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread sthaug
> So you have a 4x10G breakout and a 100G QSFP28 in the same group of 3 
> interfaces and they are all working? Just because I can install and configure 
> the optics, doesn’t mean they will function. This would conflict with what is 
> coming from Juniper Product teams.

Yes. But the speed must be configured at the port level for this to
work. This is the information we got from our Juniper product team.

See

https://www.juniper.net/documentation/en_US/junos/topics/topic-map/rate-selectability-configuring.html

Steinar Haug, Nethelp consulting, sth...@nethelp.no
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Brian Johnson
So you have a 4x10G breakout and a 100G QSFP28 in the same group of 3 
interfaces and they are all working? Just because I can install and configure 
the optics, doesn’t mean they will function. This would conflict with what is 
coming from Juniper Product teams.

To be clear, I realize that the ports do not “disappear” because you insert the 
QSFP28 into the port group, just that they will not work. :)

- Brian

> On May 6, 2020, at 9:14 AM, sth...@nethelp.no wrote:
> 
>> 2. If you put a 100G optic in the QSFP28 port, the other 2 QSFP ports are 
>> not available. So 100G per group with a QSFP28 in them. Assuming only 100G 
>> QSFPs in use, the card will do 400G total.
> 
> Yes. But you can have 8 x 10G in addition in the form of 2 x 40G with
> breakout. This is from one of our routers with an MPC7E-MRATE card:
> 
> xe-3/0/0:0  down  down
> xe-3/0/0:1  down  down
> xe-3/0/0:2  down  down
> xe-3/0/0:3  down  down
> et-3/0/2upup
> et-3/0/5upup
> xe-3/1/0:0  down  down
> xe-3/1/0:1  down  down
> xe-3/1/0:2  down  down
> xe-3/1/0:3  down  down
> et-3/1/2upup
> et-3/1/5updown
> 
> and the corresponding chassis config is:
> 
> pic 0 {
>     port 0 {
>         speed 10g;
>     }
>     port 2 {
>         speed 100g;
>     }
>     port 5 {
>         speed 100g;
>     }
> }
> pic 1 {
>     port 0 {
>         speed 10g;
>     }
>     port 2 {
>         speed 100g;
>     }
>     port 5 {
>         speed 100g;
>     }
> }
> 
> Steinar Haug, Nethelp consulting, sth...@nethelp.no

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread sthaug
> 2. If you put a 100G optic in the QSFP28 port, the other 2 QSFP ports are not 
> available. So 100G per group with a QSFP28 in them. Assuming only 100G QSFPs 
> in use, the card will do 400G total.

Yes. But you can have 8 x 10G in addition in the form of 2 x 40G with
breakout. This is from one of our routers with an MPC7E-MRATE card:

xe-3/0/0:0  down  down
xe-3/0/0:1  down  down
xe-3/0/0:2  down  down
xe-3/0/0:3  down  down
et-3/0/2upup
et-3/0/5upup
xe-3/1/0:0  down  down
xe-3/1/0:1  down  down
xe-3/1/0:2  down  down
xe-3/1/0:3  down  down
et-3/1/2upup
et-3/1/5updown

and the corresponding chassis config is:

pic 0 {
    port 0 {
        speed 10g;
    }
    port 2 {
        speed 100g;
    }
    port 5 {
        speed 100g;
    }
}
pic 1 {
    port 0 {
        speed 10g;
    }
    port 2 {
        speed 100g;
    }
    port 5 {
        speed 100g;
    }
}

Steinar Haug, Nethelp consulting, sth...@nethelp.no
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Brian Johnson
FYI this is a more complex question….

From my understanding, you need to look at the card as 4 groups of 3 ports (2 
QSFP+ and 1 QSFP28). Here are your options:

1. All 3 ports can be used at 40G or 4x10G in any combination. So 120G per 
group or 480G per card. This is the maximum for the card.
2. If you put a 100G optic in the QSFP28 port, the other 2 QSFP ports are not 
available. So 100G per group with a QSFP28 in them. Assuming only 100G QSFPs in 
use, the card will do 400G total.

You can mix what each group does to get anywhere from 400G to 480G of capacity 
from the card.

Correct me if you know better.

- Brian

> On May 6, 2020, at 7:06 AM, sth...@nethelp.no wrote:
> 
>> we have some MX-Routers (MX480 and MX960) with MPC7E-MRATE linecards. As 
>> far as I know, one PFE supports 240G each. Is it possible to use both 
>> 100G ports and in addition 14x 10G ports on a single PFE?
>> With the following configuration, I got an error "FPC 5 PIC 1 Invalid 
>> port profile configuration":
> 
> You can't oversubscribe the capacity of the MPC7 card. See
> 
> https://www.juniper.net/documentation/en_US/junos/topics/topic-map/preventing-oversubscription-active-physical-ports.html#id-supported-active-physical-ports-for-configuring-rate-selectability-to-prevent
> 
> for permitted port configurations. Note "Oversubscription of Packet
> Forwarding Engine capacity is not supported."
> 
> Steinar Haug, Nethelp consulting, sth...@nethelp.no
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Alex D.

Many thanks Charlie. It seems that calculation doesn't belong to my main
skills ;-)
Please forget my questions...

Regards,
Alex



14x 10 = 140
2x 1C = 200

total 340




___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Charlie Allom
14x 10 = 140
2x 1C = 200

total 340


On Wed, May 6, 2020 at 2:08 PM Alex D.  wrote:

> Okay, I tried with "number-of-ports" on pic-level instead. But
> unfortunately, "number-of-sub-ports 2" is ignored now.
>
> fpc 5 {
>  pic 0 {
>  pic-mode 100G;
>  }
>  pic 1 {
>  number-of-ports 5;
>  port 1 {
>  number-of-sub-ports 2;
>  }
>  }
> }
>
> Are 100G ports only recognised with a transceiver plugged in? I am
> missing the 100G ports in the output of "show int terse".
>
> router# show interfaces terse | match "xe-5/1|et-5/1" | except 16386 |
> except "\.0"
> xe-5/1/0:0  updown
> xe-5/1/0:1  updown
> xe-5/1/0:2  updown
> xe-5/1/0:3  updown
> xe-5/1/1:0  updown
> xe-5/1/1:1  updown
> xe-5/1/1:2  updown
> xe-5/1/1:3  updown
> xe-5/1/2:0  updown
> xe-5/1/2:1  updown
> xe-5/1/2:2  updown
> xe-5/1/2:3  updown
> xe-5/1/3:0  upup
> xe-5/1/3:1  upup
> xe-5/1/3:2  upup
> xe-5/1/3:3  upup
> xe-5/1/4:0  updown
> xe-5/1/4:1  updown
> xe-5/1/4:2  updown
> xe-5/1/4:3  updown
>
>
>  >You can't oversubscribe the capacity of the MPC7 card.
> Yes, I know. In my setup, I would use 14x 10G + 2x100G on pic 1 at
> maximum, which sums up to 240G, meaning no oversubscription.
>
>
> Regards,
> Alex
>
>
> >> we have some MX-Routers (MX480 and MX960) with MPC7E-MRATE linecards. As
> >> far as I know, one PFE supports 240G each. Is it possible to use both
> >> 100G ports and in addition 14x 10G ports on a single PFE?
> >> With the following configuration, I got an error "FPC 5 PIC 1 Invalid
> >> port profile configuration":
> > You can't oversubscribe the capacity of the MPC7 card. See
> >
> >
> https://www.juniper.net/documentation/en_US/junos/topics/topic-map/preventing-oversubscription-active-physical-ports.html#id-supported-active-physical-ports-for-configuring-rate-selectability-to-prevent
> >
> > for permitted port configurations. Note "Oversubscription of Packet
> > Forwarding Engine capacity is not supported."
> >
> > Steinar Haug, Nethelp consulting, sth...@nethelp.no
> >
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Alex D.

Okay, I tried with "number-of-ports" on pic-level instead. But
unfortunately, "number-of-sub-ports 2" is ignored now.

fpc 5 {
    pic 0 {
        pic-mode 100G;
    }
    pic 1 {
        number-of-ports 5;
        port 1 {
            number-of-sub-ports 2;
        }
    }
}

Are 100G ports only recognised with a transceiver plugged in? I am
missing the 100G ports in the output of "show int terse".

router# show interfaces terse | match "xe-5/1|et-5/1" | except 16386 |
except "\.0"
xe-5/1/0:0  updown
xe-5/1/0:1  updown
xe-5/1/0:2  updown
xe-5/1/0:3  updown
xe-5/1/1:0  updown
xe-5/1/1:1  updown
xe-5/1/1:2  updown
xe-5/1/1:3  updown
xe-5/1/2:0  updown
xe-5/1/2:1  updown
xe-5/1/2:2  updown
xe-5/1/2:3  updown
xe-5/1/3:0  upup
xe-5/1/3:1  upup
xe-5/1/3:2  upup
xe-5/1/3:3  upup
xe-5/1/4:0  updown
xe-5/1/4:1  updown
xe-5/1/4:2  updown
xe-5/1/4:3  updown


>You can't oversubscribe the capacity of the MPC7 card.
Yes, I know. In my setup, I would use 14x 10G + 2x100G on pic 1 at
maximum, which sums up to 240G, meaning no oversubscription.


Regards,
Alex



we have some MX-Routers (MX480 and MX960) with MPC7E-MRATE linecards. As
far as I know, one PFE supports 240G each. Is it possible to use both
100G ports and in addition 14x 10G ports on a single PFE?
With the following configuration, I got an error "FPC 5 PIC 1 Invalid
port profile configuration":

You can't oversubscribe the capacity of the MPC7 card. See

https://www.juniper.net/documentation/en_US/junos/topics/topic-map/preventing-oversubscription-active-physical-ports.html#id-supported-active-physical-ports-for-configuring-rate-selectability-to-prevent

for permitted port configurations. Note "Oversubscription of Packet
Forwarding Engine capacity is not supported."

Steinar Haug, Nethelp consulting, sth...@nethelp.no



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] DDOS_PROTOCOL_VIOLATION on DHCP - and it's not configured?

2020-05-06 Thread Michael Hare via juniper-nsp
--- Begin Message ---
If you are absolutely certain you are not providing DHCP you could always set 
the punt rate to 1 and disable logging. 

Beware, this can be an awfully sharp sword.  Ask me how I know!

system {
    ddos-protection {
        protocols {
            {$protocol} {
                aggregate {
                    bandwidth 1;
                    burst 1;
                    flow-level-detection {
                        subscriber off;
                        logical-interface off;
                    }
                    no-flow-logging;
                }
            }
        }
    }
}
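
Before dialing the policer down, it can also be worth confirming what is 
actually hitting it. These operational commands show the per-FPC counters and 
current violation state (a sketch; exact output varies by platform and release):

show ddos-protection protocols dhcpv4 statistics
show ddos-protection protocols violations

and, if needed, a quick capture of host-bound DHCP on the suspect interface:

monitor traffic interface <suspect-ifl> matching "udp port 67 or udp port 68" no-resolve

where <suspect-ifl> is a placeholder for the interface facing the relayed traffic.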

-Michael

> -Original Message-
> From: juniper-nsp  On Behalf Of
> Mike
> Sent: Tuesday, May 5, 2020 1:32 PM
> To: juniper-nsp@puck.nether.net
> Subject: [j-nsp] DDOS_PROTOCOL_VIOLATION on DHCP - and it's not
> configured?
> 
> Hello,
> 
>     On my MX240, I occasionally get log messages of this type:
> 
> May  4 20:47:38  jmx240-fmt2 jddosd[3549]:
> DDOS_PROTOCOL_VIOLATION_SET:
> Warning: Host-bound traffic for protocol/exception  DHCPv4:bad-packets
> exceeded its allowed bandwidth at fpc 1 for 417 times, started at
> 2020-05-04 20:47:37 PDT
> May  4 20:52:55  jmx240-fmt2 jddosd[3549]:
> DDOS_PROTOCOL_VIOLATION_CLEAR: INFO: Host-bound traffic for
> protocol/exception DHCPv4:bad-packets has returned to normal. Its
> allowed bandwith was exceeded at fpc 1 for 417 times, from 2020-05-04
> 20:47:37 PDT to 2020-05-04 20:47:50 PDT
> 
>     I have looked at my config, and I am positively not providing dhcp
> service of any kind, have no dhcp relay service on the router
> configured, and simply fail to see how or why these messages are being
> triggered. I do have some virtual hosts that are acting as dhcp servers
> for relayed dhcp traffic, but at the point my router sees this traffic
> it's only UDP port 67 traffic being forwarded to these servers from my
> far away dhcp clients.
> 
>     I almost want to say that, despite config, the router is in fact
> keying into relayed dhcp traffic for some reason. Wondering how I would
> go about more properly diagnosing this problem?
> 
> 
> Thank you.
> 
> 
> 
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
--- End Message ---
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread sthaug
> we have some MX-Routers (MX480 and MX960) with MPC7E-MRATE linecards. As
> far as I know, one PFE supports 240G each. Is it possible to use both
> 100G ports and in addition 14x 10G ports on a single PFE?
> With the following configuration, I got an error "FPC 5 PIC 1 Invalid
> port profile configuration":

You can't oversubscribe the capacity of the MPC7 card. See

https://www.juniper.net/documentation/en_US/junos/topics/topic-map/preventing-oversubscription-active-physical-ports.html#id-supported-active-physical-ports-for-configuring-rate-selectability-to-prevent

for permitted port configurations. Note "Oversubscription of Packet
Forwarding Engine capacity is not supported."

Steinar Haug, Nethelp consulting, sth...@nethelp.no
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Rate selectability on MPC7E-MRATE

2020-05-06 Thread Alex D.

Hi,

we have some MX-Routers (MX480 and MX960) with MPC7E-MRATE linecards. As
far as I know, one PFE supports 240G each. Is it possible to use both
100G ports and in addition 14x 10G ports on a single PFE?
With the following configuration, I got an error "FPC 5 PIC 1 Invalid
port profile configuration":

fpc 5 {
    pic 0 {
        pic-mode 100G;
    }
    pic 1 {
        port 1 {
            number-of-sub-ports 2;
            speed 10g;
        }
        port 2 {
            speed 100g;
        }
        port 3 {
            speed 10g;
        }
        port 4 {
            speed 10g;
        }
        port 5 {
            speed 10g;
        }
    }
}

Thanks in advance for your replies.
Regards,
Alex
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp