Re: [j-nsp] M10i

2013-04-11 Thread Per Granath
https://www.juniper.net/techpubs/en_US/release-independent/junos/topics/reference/general/mic-mx-series-supported.html#toc-table-mics-mx80


-Original Message-
From: juniper-nsp-boun...@puck.nether.net 
[mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of joel jaeggli
Sent: Thursday, April 11, 2013 7:58 AM
To: nsp-juniper
Subject: Re: [j-nsp] M10i

On 4/10/13 5:45 PM, Chris Adams wrote:
 Once upon a time, Correa Adolfo acor...@mcmtelecom.com.mx said:
 I thought the MX series was purely Ethernet.
 I think that was true initially, but (for example) there are MX5-MX80
 MICs to handle circuits from T1 up to OC192.

http://www.juniper.net/us/en/local/pdf/datasheets/1000378-en.pdf
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i

2013-04-10 Thread Per Granath
Yes.

-Original Message-
From: juniper-nsp-boun...@puck.nether.net 
[mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of Ahmad Alhady
Sent: Wednesday, April 10, 2013 12:38 PM
To: Michel de Nostredame
Cc: nsp-juniper
Subject: Re: [j-nsp] M10i

But does the MX80 support SDH?


On Wed, Apr 10, 2013 at 11:30 AM, Michel de Nostredame
d.nos...@gmail.com wrote:

 Ah~ the M20 does not support 10GE interfaces, and the M20 is already EOL.
 MX could be a good choice, see

 http://www.juniper.net/us/en/local/pdf/datasheets/1000378-en.pdf
 for MX80 cards, and

 http://www.juniper.net/us/en/products-services/routing/mx-series/
 for all MX series.

 --
 Michel~

 On Tue, Apr 9, 2013 at 7:07 PM, Jonathan Lassoff j...@thejof.com wrote:
  I think you'll need at least an M20 for your 10 GigE requirement as well
  as SDH.

  If you can somehow get a different transit circuit than your SDH one, an
  MX5 would be a much closer (throughput-wise) and better bang-for-your-buck
  replacement for a 7206 than an M-series.
  J-series with a T1 module could also work, depending on your STM-1.

  If you need SDH though, you'll need M or T. J can do T1s.

  --j
 
  On Tue, Apr 9, 2013 at 6:28 PM, Orlando Cordova Gonzales  
  orlando.cordova.gonza...@gmail.com wrote:
 
  hello,
 
   I need to replace a Cisco 7206 router; which box would you recommend? The
   requirements are two 10G interfaces, two 1G interfaces, and an STM-1
   interface to connect to the ISP. I was considering the M10i router, but it
   does not support 10G interfaces.
 
  thank you very much for your help.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i

2013-04-10 Thread Correa Adolfo
I thought the MX series was purely Ethernet.

-Original Message-
From: juniper-nsp-boun...@puck.nether.net 
[mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of Per Granath
Sent: Wednesday, April 10, 2013 06:31 a.m.
To: Ahmad Alhady; Michel de Nostredame
Cc: nsp-juniper
Subject: Re: [j-nsp] M10i

Yes.

-Original Message-
From: juniper-nsp-boun...@puck.nether.net 
[mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of Ahmad Alhady
Sent: Wednesday, April 10, 2013 12:38 PM
To: Michel de Nostredame
Cc: nsp-juniper
Subject: Re: [j-nsp] M10i

But does the MX80 support SDH?


On Wed, Apr 10, 2013 at 11:30 AM, Michel de Nostredame
d.nos...@gmail.com wrote:

 Ah~ the M20 does not support 10GE interfaces, and the M20 is already EOL.
 MX could be a good choice, see

 http://www.juniper.net/us/en/local/pdf/datasheets/1000378-en.pdf
 for MX80 cards, and

 http://www.juniper.net/us/en/products-services/routing/mx-series/
 for all MX series.

 --
 Michel~

 On Tue, Apr 9, 2013 at 7:07 PM, Jonathan Lassoff j...@thejof.com wrote:
  I think you'll need at least an M20 for your 10 GigE requirement as well
  as SDH.

  If you can somehow get a different transit circuit than your SDH one, an
  MX5 would be a much closer (throughput-wise) and better bang-for-your-buck
  replacement for a 7206 than an M-series.
  J-series with a T1 module could also work, depending on your STM-1.

  If you need SDH though, you'll need M or T. J can do T1s.

  --j
 
  On Tue, Apr 9, 2013 at 6:28 PM, Orlando Cordova Gonzales  
  orlando.cordova.gonza...@gmail.com wrote:
 
  hello,
 
   I need to replace a Cisco 7206 router; which box would you recommend? The
   requirements are two 10G interfaces, two 1G interfaces, and an STM-1
   interface to connect to the ISP. I was considering the M10i router, but it
   does not support 10G interfaces.
 
  thank you very much for your help.



La información en este correo electrónico y sus anexos es confidencial y 
privilegiada. Está dirigida exclusivamente a sus destinatarios y por lo tanto 
nadie más está autorizado a tener acceso a élla. Si Ud. no es el destinatario, 
es ilícito imprimirla, reproducirla o distribuirla. Si lo recibió por error, 
por favor avise al remitente y borre cualquier registro en sus sistemas.

CONFIDENTIALITY NOTICE: This email message and its attachments, if any, are 
intended only for the person or entity to which it is addressed and contains 
privileged information. Any use, printing, disclosure, or distribution of such 
information without the written authorization is prohibited. If you are not the 
intended recipient, please contact the sender and destroy all copies of the 
original message.

Nuestro aviso de privacidad está publicado en la página web: 
http://www.mcmtelecom.com.mx/common/politica_privacidad.htm



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i

2013-04-10 Thread joel jaeggli

On 4/10/13 5:45 PM, Chris Adams wrote:

Once upon a time, Correa Adolfo acor...@mcmtelecom.com.mx said:

I thought the MX series was purely Ethernet.

I think that was true initially, but (for example) there are MX5-MX80 MICs
to handle circuits from T1 up to OC192.


http://www.juniper.net/us/en/local/pdf/datasheets/1000378-en.pdf
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] M10i

2013-04-09 Thread Orlando Cordova Gonzales
hello,

I need to replace a Cisco 7206 router; which box would you recommend? The
requirements are two 10G interfaces, two 1G interfaces, and an STM-1
interface to connect to the ISP. I was considering the M10i router, but it
does not support 10G interfaces.

thank you very much for your help.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i

2013-04-09 Thread Jonathan Lassoff
I think you'll need at least an M20 for your 10 GigE requirement as well as
SDH.

If you can somehow get a different transit circuit than your SDH one, an
MX5 would be a much closer (throughput-wise) and better bang-for-your-buck
replacement for a 7206 than an M-series.
J-series with a T1 module could also work, depending on your STM-1.

If you need SDH though, you'll need M or T. J can do T1s.

--j

On Tue, Apr 9, 2013 at 6:28 PM, Orlando Cordova Gonzales 
orlando.cordova.gonza...@gmail.com wrote:

 hello,

  I need to replace a Cisco 7206 router; which box would you recommend? The
  requirements are two 10G interfaces, two 1G interfaces, and an STM-1
  interface to connect to the ISP. I was considering the M10i router, but it
  does not support 10G interfaces.

 thank you very much for your help.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i FPC PIC Throughput Questions

2013-02-26 Thread Matt Bentley
Thanks very much.  So to confirm, there is nothing that guarantees any one PIC
a certain minimum amount, right?

What determines what is dropped when there is contention on the bus?  Are
there any commands I could use to see whether a bus is/was congested and
how much of what was dropped?

On Sat, Feb 23, 2013 at 9:37 PM, Christopher E. Brown 
chris.br...@acsalaska.net wrote:


 Bus is _shared_, with CFEB you have guaranteed 3.2Gbit shared by up to 4
 PICs, with E-CFEB is non issue single PIC limit is 1G and E-CFEB will do
 full 1G per no matter what.

 If you try to handle more than 3.2Gbit on a CFEB bus (X-0/X/X or
 X-1/X/X) you may see bus contention depending on packet size.

 Load 4xGE and maybe.  Load 3xGE + 4xDS3 is pushing limit but OK.

 With E-CFEB, non issue.

 With CFEB summ the bandwith make make sure is 3200Mbit or less, and 3200
 is shared by all 4 PICs.


 On 2/23/13 6:51 PM, Matt Bentley wrote:
  Thanks!  So it would be correct to say you should NEVER see
  oversubscription on a channelized DS3 card right?  Obviously, you can
  overdrive a single T1, but you'd never see drops due to the PIC itself?
   I guess what I'm asking is whether or not the bandwidth availalble on a
  FPC is allocated equally per PIC, or if everyone sort of shares it.
 
  On Sat, Feb 23, 2013 at 8:13 PM, Christopher E. Brown
  chris.br...@acsalaska.net wrote:
 
 
  With the std cfeb after internal overhead per bus capacity is
 3.2Gbit of
  traffic, this is worst case minimum small packets, etc.
 
  Raw bus capacity is IIRC ~ 4Gbit, difference is overhead.
 
  Unless you are doing all small packet, actual limit is higher than
 3.2.
 
  Enhanced CFEB bumps the raw bus capacity to something around 5Gbit,
 and
  the after all overheads forwarding capacity to 4Gbit (based on the 1G
  per PIC limit).
 
  Summ...
 
  CFEB
  Up to 1Gbit per PIC, 3.2Gbit per bus _worst case small
 packet_
 
  E-CFEB
  Up to 1Gbit per PIC
 
 
  These figures are
  On 2/23/13 6:01 PM, Matt Bentley wrote:
   OK - so there has been a lot of discussion around this that I've
  seen, but
   I've searched for hours and still can't find concrete answers.
   Can someone
   help?
  
   1.  Does the 3.2 Gbps throughput limitation include overhead?  In
  other
   words, Is the raw throughput 4 Gbps with effective throughput of
 3.2
   Gbps?  Or is it 3.2 Gbps of raw throughput with effective
  throughput of 2.5
   Gbps?
   2.  Is this throughput per PIC on the FPC?  So let's say I have
  three 4x
   GigE IQ2 PICs and one channelized DS3 IQ PIC.  Does each PIC get
  bandwidth
   allocated equally between them?  So is it 800 Mbps per PIC, and
  the PICs
   can't steal bandwidth from another one?
   3.  Where, and based on what, is traffic dropped with  Juniper
  head of line
   blocking (ie where multiple high speed input interfaces try to go
  out the
   same lower speed exit interface)?
  
   Thanks very much!


 --
 
 Christopher E. Brown   chris.br...@acsalaska.net   desk (907) 550-8393
  cell (907) 632-8492
 IP Engineer - ACS
 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i FPC PIC Throughput Questions

2013-02-26 Thread Christopher E. Brown

If you oversubscribe the bus, you oversubscribe the bus.  Traffic load
vs. average packet size vs. level of oversubscription vs. burstiness will
determine the loss potential.
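
To get a feel for the packet-size dependence, here is a minimal Python sketch.
The ~4Gbit raw-bus figure is the one quoted earlier in this thread; the
per-packet overhead value is only an assumption, picked so that 64-byte
packets land on the 3.2Gbit worst-case number, not a published Juniper figure.

# Rough feel for how average packet size eats into the raw CFEB bus capacity.
RAW_BUS_GBPS = 4.0       # raw bus figure quoted in this thread (approximate)
OVERHEAD_BYTES = 16      # assumed per-packet overhead, for illustration only

def effective_gbps(avg_packet_bytes, raw_gbps=RAW_BUS_GBPS, overhead=OVERHEAD_BYTES):
    """Capacity left for packet data once the assumed per-packet overhead is paid."""
    return raw_gbps * avg_packet_bytes / (avg_packet_bytes + overhead)

for size in (64, 256, 512, 1500):
    print(f"avg {size:4d}B packets -> ~{effective_gbps(size):.2f} Gbit/s usable")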

On 2/26/13 11:59 AM, Matt Bentley wrote:
 Thanks very much.  So to confirm, there is nothing that guarantees any one
 PIC a certain minimum amount, right?
 
 What determines what is dropped when there is contention on the bus?
  Are there any commands I could use to see whether a bus is/was
 congested and how much of what was dropped?
 
 On Sat, Feb 23, 2013 at 9:37 PM, Christopher E. Brown
 chris.br...@acsalaska.net wrote:
 
 
 Bus is _shared_, with CFEB you have guaranteed 3.2Gbit shared by up to 4
 PICs, with E-CFEB is non issue single PIC limit is 1G and E-CFEB will do
 full 1G per no matter what.
 
 If you try to handle more than 3.2Gbit on a CFEB bus (X-0/X/X or
 X-1/X/X) you may see bus contention depending on packet size.
 
 Load 4xGE and maybe.  Load 3xGE + 4xDS3 is pushing limit but OK.
 
 With E-CFEB, non issue.
 
 With CFEB summ the bandwith make make sure is 3200Mbit or less, and 3200
 is shared by all 4 PICs.
 
 
 On 2/23/13 6:51 PM, Matt Bentley wrote:
  Thanks!  So it would be correct to say you should NEVER see
  oversubscription on a channelized DS3 card right?  Obviously, you can
  overdrive a single T1, but you'd never see drops due to the PIC
 itself?
   I guess what I'm asking is whether or not the bandwidth
 availalble on a
  FPC is allocated equally per PIC, or if everyone sort of shares it.
 
  On Sat, Feb 23, 2013 at 8:13 PM, Christopher E. Brown
   chris.br...@acsalaska.net wrote:
 
 
  With the std cfeb after internal overhead per bus capacity is
 3.2Gbit of
  traffic, this is worst case minimum small packets, etc.
 
  Raw bus capacity is IIRC ~ 4Gbit, difference is overhead.
 
  Unless you are doing all small packet, actual limit is higher
 than 3.2.
 
  Enhanced CFEB bumps the raw bus capacity to something around
 5Gbit, and
  the after all overheads forwarding capacity to 4Gbit (based on
 the 1G
  per PIC limit).
 
  Summ...
 
  CFEB
  Up to 1Gbit per PIC, 3.2Gbit per bus _worst case small
 packet_
 
  E-CFEB
  Up to 1Gbit per PIC
 
 
  These figures are
  On 2/23/13 6:01 PM, Matt Bentley wrote:
   OK - so there has been a lot of discussion around this that I've
  seen, but
   I've searched for hours and still can't find concrete answers.
   Can someone
   help?
  
   1.  Does the 3.2 Gbps throughput limitation include
 overhead?  In
  other
   words, Is the raw throughput 4 Gbps with effective
 throughput of 3.2
   Gbps?  Or is it 3.2 Gbps of raw throughput with effective
  throughput of 2.5
   Gbps?
   2.  Is this throughput per PIC on the FPC?  So let's say I have
  three 4x
   GigE IQ2 PICs and one channelized DS3 IQ PIC.  Does each PIC get
  bandwidth
   allocated equally between them?  So is it 800 Mbps per PIC, and
  the PICs
   can't steal bandwidth from another one?
   3.  Where, and based on what, is traffic dropped with  Juniper
  head of line
   blocking (ie where multiple high speed input interfaces try
 to go
  out the
   same lower speed exit interface)?
  
   Thanks very much!

[j-nsp] M10i FPC PIC Throughput Questions

2013-02-23 Thread Matt Bentley
OK - so there has been a lot of discussion around this that I've seen, but
I've searched for hours and still can't find concrete answers.  Can someone
help?

1.  Does the 3.2 Gbps throughput limitation include overhead?  In other
words, Is the raw throughput 4 Gbps with effective throughput of 3.2
Gbps?  Or is it 3.2 Gbps of raw throughput with effective throughput of 2.5
Gbps?
2.  Is this throughput per PIC on the FPC?  So let's say I have three 4x
GigE IQ2 PICs and one channelized DS3 IQ PIC.  Does each PIC get bandwidth
allocated equally between them?  So is it 800 Mbps per PIC, and the PICs
can't steal bandwidth from another one?
3.  Where, and based on what, is traffic dropped with Juniper head-of-line
blocking (i.e. where multiple high-speed input interfaces try to go out the
same lower-speed exit interface)?

Thanks very much!
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i FPC PIC Throughput Questions

2013-02-23 Thread Christopher E. Brown

With the standard CFEB, the per-bus capacity after internal overhead is 3.2Gbit
of traffic; this is the worst case (minimum-size small packets, etc.).

Raw bus capacity is IIRC ~4Gbit; the difference is overhead.

Unless you are doing all small packets, the actual limit is higher than 3.2.

The Enhanced CFEB bumps the raw bus capacity to something around 5Gbit, and
the after-all-overheads forwarding capacity to 4Gbit (based on the 1G
per PIC limit).

Summary:

CFEB
        Up to 1Gbit per PIC, 3.2Gbit per bus _worst case small packet_

E-CFEB
        Up to 1Gbit per PIC


These figures are
On 2/23/13 6:01 PM, Matt Bentley wrote:
 OK - so there has been a lot of discussion around this that I've seen, but
 I've searched for hours and still can't find concrete answers.  Can someone
 help?
 
 1.  Does the 3.2 Gbps throughput limitation include overhead?  In other
 words, Is the raw throughput 4 Gbps with effective throughput of 3.2
 Gbps?  Or is it 3.2 Gbps of raw throughput with effective throughput of 2.5
 Gbps?
 2.  Is this throughput per PIC on the FPC?  So let's say I have three 4x
 GigE IQ2 PICs and one channelized DS3 IQ PIC.  Does each PIC get bandwidth
 allocated equally between them?  So is it 800 Mbps per PIC, and the PICs
 can't steal bandwidth from another one?
 3.  Where, and based on what, is traffic dropped with  Juniper head of line
 blocking (ie where multiple high speed input interfaces try to go out the
 same lower speed exit interface)?
 
 Thanks very much!
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp
 


-- 

Christopher E. Brown   chris.br...@acsalaska.net   desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i FPC PIC Throughput Questions

2013-02-23 Thread Matt Bentley
Thanks!  So would it be correct to say you should NEVER see
oversubscription on a channelized DS3 card, right?  Obviously, you can
overdrive a single T1, but you'd never see drops due to the PIC itself?  I
guess what I'm asking is whether the bandwidth available on an FPC
is allocated equally per PIC, or if everyone sort of shares it.

On Sat, Feb 23, 2013 at 8:13 PM, Christopher E. Brown 
chris.br...@acsalaska.net wrote:


 With the std cfeb after internal overhead per bus capacity is 3.2Gbit of
 traffic, this is worst case minimum small packets, etc.

 Raw bus capacity is IIRC ~ 4Gbit, difference is overhead.

 Unless you are doing all small packet, actual limit is higher than 3.2.

 Enhanced CFEB bumps the raw bus capacity to something around 5Gbit, and
 the after all overheads forwarding capacity to 4Gbit (based on the 1G
 per PIC limit).

 Summ...

 CFEB
 Up to 1Gbit per PIC, 3.2Gbit per bus _worst case small packet_

 E-CFEB
 Up to 1Gbit per PIC


 These figures are
 On 2/23/13 6:01 PM, Matt Bentley wrote:
  OK - so there has been a lot of discussion around this that I've seen,
 but
  I've searched for hours and still can't find concrete answers.  Can
 someone
  help?
 
  1.  Does the 3.2 Gbps throughput limitation include overhead?  In other
  words, Is the raw throughput 4 Gbps with effective throughput of 3.2
  Gbps?  Or is it 3.2 Gbps of raw throughput with effective throughput of
 2.5
  Gbps?
  2.  Is this throughput per PIC on the FPC?  So let's say I have three 4x
  GigE IQ2 PICs and one channelized DS3 IQ PIC.  Does each PIC get
 bandwidth
  allocated equally between them?  So is it 800 Mbps per PIC, and the PICs
  can't steal bandwidth from another one?
  3.  Where, and based on what, is traffic dropped with  Juniper head of
 line
  blocking (ie where multiple high speed input interfaces try to go out the
  same lower speed exit interface)?
 
  Thanks very much!
  ___
  juniper-nsp mailing list juniper-nsp@puck.nether.net
  https://puck.nether.net/mailman/listinfo/juniper-nsp
 


 --
 
 Christopher E. Brown   chris.br...@acsalaska.net   desk (907) 550-8393
  cell (907) 632-8492
 IP Engineer - ACS
 
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i FPC PIC Throughput Questions

2013-02-23 Thread Christopher E. Brown

The bus is _shared_: with the CFEB you have a guaranteed 3.2Gbit shared by up
to 4 PICs; with the E-CFEB it is a non-issue, since the single-PIC limit is 1G
and the E-CFEB will do the full 1G per PIC no matter what.

If you try to handle more than 3.2Gbit on a CFEB bus (X-0/X/X or
X-1/X/X) you may see bus contention depending on packet size.

Load 4xGE and maybe.  Load 3xGE + 4xDS3 is pushing the limit, but OK.

With the E-CFEB, non-issue.

With the CFEB, sum the bandwidth and make sure it is 3200Mbit or less; that
3200Mbit is shared by all 4 PICs.
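
As a back-of-the-envelope check, here is a minimal Python sketch using the
numbers above (1Gbit per PIC, 3.2Gbit shared per CFEB bus); the example loads
are made up:

# Check a planned per-PIC load (in Gbit/s) against the limits quoted above.
PER_PIC_LIMIT_GBPS = 1.0    # per-PIC limit (CFEB and E-CFEB)
CFEB_BUS_LIMIT_GBPS = 3.2   # shared by up to 4 PICs on one CFEB bus

def check_bus(pic_loads_gbps, enhanced_cfeb=False):
    """Return a list of warnings for one bus worth of PICs."""
    warnings = [f"PIC {i}: {load}G exceeds the {PER_PIC_LIMIT_GBPS}G per-PIC limit"
                for i, load in enumerate(pic_loads_gbps)
                if load > PER_PIC_LIMIT_GBPS]
    total = sum(pic_loads_gbps)
    if not enhanced_cfeb and total > CFEB_BUS_LIMIT_GBPS:
        warnings.append(f"bus total {total:.2f}G exceeds the shared "
                        f"{CFEB_BUS_LIMIT_GBPS}G CFEB limit")
    return warnings

# Example: 3x GE near line rate plus a 4xDS3 PIC (made-up loads).
print(check_bus([0.9, 0.9, 0.9, 0.18]) or "looks OK")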


On 2/23/13 6:51 PM, Matt Bentley wrote:
 Thanks!  So it would be correct to say you should NEVER see
 oversubscription on a channelized DS3 card right?  Obviously, you can
 overdrive a single T1, but you'd never see drops due to the PIC itself?
  I guess what I'm asking is whether or not the bandwidth availalble on a
 FPC is allocated equally per PIC, or if everyone sort of shares it.
 
 On Sat, Feb 23, 2013 at 8:13 PM, Christopher E. Brown
  chris.br...@acsalaska.net wrote:
 
 
 With the std cfeb after internal overhead per bus capacity is 3.2Gbit of
 traffic, this is worst case minimum small packets, etc.
 
 Raw bus capacity is IIRC ~ 4Gbit, difference is overhead.
 
 Unless you are doing all small packet, actual limit is higher than 3.2.
 
 Enhanced CFEB bumps the raw bus capacity to something around 5Gbit, and
 the after all overheads forwarding capacity to 4Gbit (based on the 1G
 per PIC limit).
 
 Summ...
 
 CFEB
 Up to 1Gbit per PIC, 3.2Gbit per bus _worst case small packet_
 
 E-CFEB
 Up to 1Gbit per PIC
 
 
 These figures are
 On 2/23/13 6:01 PM, Matt Bentley wrote:
  OK - so there has been a lot of discussion around this that I've
 seen, but
  I've searched for hours and still can't find concrete answers.
  Can someone
  help?
 
  1.  Does the 3.2 Gbps throughput limitation include overhead?  In
 other
  words, Is the raw throughput 4 Gbps with effective throughput of 3.2
  Gbps?  Or is it 3.2 Gbps of raw throughput with effective
 throughput of 2.5
  Gbps?
  2.  Is this throughput per PIC on the FPC?  So let's say I have
 three 4x
  GigE IQ2 PICs and one channelized DS3 IQ PIC.  Does each PIC get
 bandwidth
  allocated equally between them?  So is it 800 Mbps per PIC, and
 the PICs
  can't steal bandwidth from another one?
  3.  Where, and based on what, is traffic dropped with  Juniper
 head of line
  blocking (ie where multiple high speed input interfaces try to go
 out the
  same lower speed exit interface)?
 
  Thanks very much!

-- 

Christopher E. Brown   chris.br...@acsalaska.net   desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] M10i with bras?

2012-12-10 Thread Chris Adams
Since somebody asked about MX5, I figured I'd ask about M10i...

We have a few hundred PPPoE DSL customers from AT&T (old BellSouth
land), delivered to us over an ATM OC-3 (they won't deliver over
anything but ATM) carrying L2TP tunnels.  Right now, that's terminated
on some old EOL equipment, and I'd like to get them on something newer.
We have an M10i that is not doing a lot, and I think I have seen mention
of using that platform as an LNS.

Any comments?  Is this something that would work, or is it a case of
here be dragons?
-- 
Chris Adams cmad...@hiwaay.net
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i with bras?

2012-12-10 Thread sthaug
 We have a few hundred PPPoE DSL customers from ATT (old BellSouth
 land), delivered to us over an ATM OC-3 (they won't deliver over
 anything but ATM) carrying L2TP tunnels.  Right now, that's terminated
 on some old EOL equipment, and I'd like to get them on something newer.
 We have an M10i that is not doing a lot, and I think I have seen mention
 of using that platform as an LNS.
 
 Any comments?  Is this something that would work, or is it a case of
 here be dragons?

Not commenting on L2TP specifically:

Please note that the BRAS functionality (e.g. forwarding-options
dhcp-relay) is *not* supported on M7i/M10i. It *is* supported on MX.

Having said that: We ran DHCP relay on M7i for a while, and it worked
for us. 

Steinar Haug, Nethelp consulting, sth...@nethelp.no
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i JUNOS Upgrade

2011-09-29 Thread Jonas Frey (Probe Networks)
I think it should be possible to upgrade to 10.x without a CF. The M7i
initially came without a CF.
If the box is not in production you could just try updating it.
Otherwise just buy a SanDisk 1/2GB CF cheaply on eBay.

If you have redundant routing engines you need to upgrade both separately.
This means first upgrading RE0 with the install media and then RE1 (by
putting the install media into RE1 and connecting the console cable to
RE1).

-Jonas

Am Mittwoch, den 28.09.2011, 23:24 +0300 schrieb Jake Jake:
 I do have 2 spare 256MB drams which would meet the requirement. But in
 most of the documentation in Juniper they mention a mandatory
 requirement of 1G compact flash. But currently I don't have a compact
 flash on the router. I can see only ad1s1 . I guess this is the hard
 disk on the router.  Will upgrade be still possible without the
 compact flash.
  
 Further if a install media is used , how would it work with redundant
 routing engine upgrades. 
  
 Cheers
 
 
 On Wed, Sep 28, 2011 at 11:12 PM, Jonas Frey (Probe Networks)
 j...@probe-networks.de wrote:
 Jake,
 
 as far as i know you need more than 512MB dram to go past
 JunOS 10.x.
 (I know there was a limitation but i dont recall where in
 detail).
 Any way with less than 768MB Ram you are asking for trouble
 with any
 modern JunOS.
 Best would be to upgrade your RE-5 to 768 MB which is the max.
 
 The RE-5 only comes with 256MB sticks, so you would only need
 to buy 1
 more. This will be fine if you buy them from juniper ($$$).
 If you are going the 3rd party route then it'll be better to
 buy 3x256MB
 sticks since otherwise the chip types wont match which could
 lead to
 problems. The cost for these is probably just a few dollars...
 
 512MB sticks only work on the RE-5+ aka RE-850.
 
 As for the upgrade: Get yourself a install media (or create
 one) and
 save yourself the trouble of going via various intermediate
 versions
 (also this would be alot faster).
 
 -Jonas
 
 
 Am Mittwoch, den 28.09.2011, 21:43 +0300 schrieb Jake Jake:
  Hi all,
 
  I am looking at upgrading the JUNOS on our M10i router.
 Current JUNOS
  platform is 6.3R1.3 . The router has redundant routing
 Engine  RE-5.0 with
  512MB DRAM . Also there is no compact flash on board only
 *ad1s1*. Can any
  one suggest on if I can upgrade the router to 11.1R5.4 with
 the current
  hardware specification .  Please advise on if a direct
 upgrade can be done
  as well from 6.3 to 11.1.
 
  Plus as I understand M10i has 3 DRAM slots. Is there any way
 of knowing the
  combination of RAM used ..i.e 256+256MB or a single 512MB
 RAM.
 
  Cheers
  ___
  juniper-nsp mailing list juniper-nsp@puck.nether.net
  https://puck.nether.net/mailman/listinfo/juniper-nsp
 
 


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

[j-nsp] M10i JUNOS Upgrade

2011-09-28 Thread Jake Jake
Hi all,

I am looking at upgrading the JUNOS on our M10i router. The current JUNOS
version is 6.3R1.3. The router has redundant Routing Engines (RE-5.0) with
512MB DRAM. Also, there is no compact flash on board, only *ad1s1*. Can
anyone suggest whether I can upgrade the router to 11.1R5.4 with the current
hardware specification? Please also advise whether a direct upgrade can be
done from 6.3 to 11.1.

Plus, as I understand it, the M10i has 3 DRAM slots. Is there any way of
knowing the combination of RAM used, i.e. 256MB+256MB or a single 512MB stick?

Cheers
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i JUNOS Upgrade

2011-09-28 Thread Jeff Wheeler
On Wed, Sep 28, 2011 at 2:43 PM, Jake Jake 2012j...@gmail.com wrote:
 I am looking at upgrading the JUNOS on our M10i router. Current JUNOS
 platform is 6.3R1.3 . The router has redundant routing Engine  RE-5.0 with
 512MB DRAM . Also there is no compact flash on board only *ad1s1*. Can any
 one suggest on if I can upgrade the router to 11.1R5.4 with the current
 hardware specification .  Please advise on if a direct upgrade can be done
 as well from 6.3 to 11.1.

If you have DFZ routes you should upgrade the RAM to 768MB, or
alternatively, replace the router or buy more modern routing engines.
There is a big jump in memory usage in 8.x and if you have only 512MB
and are carrying Internet BGP routes, you will be using the swap and
the RE will perform badly.

No, you cannot do a direct upgrade from 6.3 to 11.1.  You'll be going
through quite a few intermediate software versions to do that.  It
will be easier to simply reinstall Junos from an 11.1 install-media
disk and then load your configuration.

 Plus as I understand M10i has 3 DRAM slots. Is there any way of knowing the
 combination of RAM used ..i.e 256+256MB or a single 512MB RAM.

I don't think the RE-5.0 will recognize more than 256MB per slot.
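
If it helps to sanity-check DIMM combinations, here is a quick Python sketch
using the figures mentioned in this thread (3 slots, 256MB recognized per slot
on the RE-5.0, 768MB max); treat them as the thread's numbers, not an official
spec:

# Sanity-check a proposed RE-5.0 DIMM layout against the figures in this thread.
SLOTS = 3
MAX_PER_SLOT_MB = 256   # per the note above
RECOMMENDED_MB = 768    # max for the RE-5, and the amount suggested for modern Junos

def check_dimms(dimms_mb):
    if len(dimms_mb) > SLOTS:
        return "too many DIMMs for the 3 slots"
    if any(d > MAX_PER_SLOT_MB for d in dimms_mb):
        return "a DIMM larger than 256MB will likely not be recognized"
    total = sum(dimms_mb)
    if total < RECOMMENDED_MB:
        return f"{total}MB total - below the {RECOMMENDED_MB}MB suggested for modern Junos"
    return f"{total}MB total - at the {RECOMMENDED_MB}MB maximum"

print(check_dimms([256, 256]))        # the current configuration in the thread
print(check_dimms([256, 256, 256]))   # the proposed upgrade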

-- 
Jeff S Wheeler j...@inconcepts.biz
Sr Network Operator  /  Innovative Network Concepts

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i JUNOS Upgrade

2011-09-28 Thread James Jones
Just a tip: I have found it is always easier to back up everything and use
the jinstall file.





On Wed, Sep 28, 2011 at 3:06 PM, Jeff Wheeler j...@inconcepts.biz wrote:

 On Wed, Sep 28, 2011 at 2:43 PM, Jake Jake 2012j...@gmail.com wrote:
  I am looking at upgrading the JUNOS on our M10i router. Current JUNOS
  platform is 6.3R1.3 . The router has redundant routing Engine  RE-5.0
 with
  512MB DRAM . Also there is no compact flash on board only *ad1s1*. Can
 any
  one suggest on if I can upgrade the router to 11.1R5.4 with the current
  hardware specification .  Please advise on if a direct upgrade can be
 done
  as well from 6.3 to 11.1.

 If you have DFZ routes you should upgrade the RAM to 768MB, or
 alternatively, replace the router or buy more modern routing engines.
 There is a big jump in memory usage in 8.x and if you have only 512MB
 and are carrying Internet BGP routes, you will be using the swap and
 the RE will perform badly.

 No, you cannot do a direct upgrade from 6.3 to 11.1.  You'll be going
 through quite a few intermediate software versions to do that.  It
 will be easier to simply reinstall Junos from an 11.1 install-media
 disk and then load your configuration.

  Plus as I understand M10i has 3 DRAM slots. Is there any way of knowing
 the
  combination of RAM used ..i.e 256+256MB or a single 512MB RAM.

 I don't think the RE-5.0 will recognize more than 256MB per slot.

 --
 Jeff S Wheeler j...@inconcepts.biz
 Sr Network Operator  /  Innovative Network Concepts

 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i JUNOS Upgrade

2011-09-28 Thread Jonas Frey (Probe Networks)
Jake,

As far as I know you need more than 512MB DRAM to go past JunOS 10.x.
(I know there was a limitation but I don't recall the details.)
Anyway, with less than 768MB RAM you are asking for trouble with any
modern JunOS.
Best would be to upgrade your RE-5 to 768MB, which is the max.

The RE-5 only comes with 256MB sticks, so you would only need to buy 1
more. This will be fine if you buy them from Juniper ($$$).
If you are going the 3rd-party route then it'll be better to buy 3x256MB
sticks, since otherwise the chip types won't match, which could lead to
problems. The cost for these is probably just a few dollars...

512MB sticks only work on the RE-5+ aka RE-850.

As for the upgrade: get yourself an install media (or create one) and
save yourself the trouble of going via various intermediate versions
(this would also be a lot faster).

-Jonas


Am Mittwoch, den 28.09.2011, 15:27 -0400 schrieb James Jones:
 Just a tip I have found it always easier to backup everything and use the
 jinstall file.
 
 
 
 
 
 On Wed, Sep 28, 2011 at 3:06 PM, Jeff Wheeler j...@inconcepts.biz wrote:
 
  On Wed, Sep 28, 2011 at 2:43 PM, Jake Jake 2012j...@gmail.com wrote:
   I am looking at upgrading the JUNOS on our M10i router. Current JUNOS
   platform is 6.3R1.3 . The router has redundant routing Engine  RE-5.0
  with
   512MB DRAM . Also there is no compact flash on board only *ad1s1*. Can
  any
   one suggest on if I can upgrade the router to 11.1R5.4 with the current
   hardware specification .  Please advise on if a direct upgrade can be
  done
   as well from 6.3 to 11.1.
 
  If you have DFZ routes you should upgrade the RAM to 768MB, or
  alternatively, replace the router or buy more modern routing engines.
  There is a big jump in memory usage in 8.x and if you have only 512MB
  and are carrying Internet BGP routes, you will be using the swap and
  the RE will perform badly.
 
  No, you cannot do a direct upgrade from 6.3 to 11.1.  You'll be going
  through quite a few intermediate software versions to do that.  It
  will be easier to simply reinstall Junos from an 11.1 install-media
  disk and then load your configuration.
 
   Plus as I understand M10i has 3 DRAM slots. Is there any way of knowing
  the
   combination of RAM used ..i.e 256+256MB or a single 512MB RAM.
 
  I don't think the RE-5.0 will recognize more than 256MB per slot.
 
  --
  Jeff S Wheeler j...@inconcepts.biz
  Sr Network Operator  /  Innovative Network Concepts
 
  ___
  juniper-nsp mailing list juniper-nsp@puck.nether.net
  https://puck.nether.net/mailman/listinfo/juniper-nsp
 
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] M10i JUNOS Upgrade

2011-09-28 Thread Jake Jake
 I do have 2 spare 256MB DRAMs, which would meet the requirement. But in most
 of the Juniper documentation they mention a mandatory requirement of a 1G
 compact flash. Currently I don't have a compact flash on the router; I can
 see only ad1s1, which I guess is the hard disk on the router. Will the
 upgrade still be possible without the compact flash?

 Further, if an install media is used, how would it work with redundant
 routing engine upgrades?

 Cheers

 On Wed, Sep 28, 2011 at 11:12 PM, Jonas Frey (Probe Networks) 
 j...@probe-networks.de wrote:

 Jake,

 as far as i know you need more than 512MB dram to go past JunOS 10.x.
 (I know there was a limitation but i dont recall where in detail).
 Any way with less than 768MB Ram you are asking for trouble with any
 modern JunOS.
 Best would be to upgrade your RE-5 to 768 MB which is the max.

 The RE-5 only comes with 256MB sticks, so you would only need to buy 1
 more. This will be fine if you buy them from juniper ($$$).
 If you are going the 3rd party route then it'll be better to buy 3x256MB
 sticks since otherwise the chip types wont match which could lead to
 problems. The cost for these is probably just a few dollars...

 512MB sticks only work on the RE-5+ aka RE-850.

 As for the upgrade: Get yourself a install media (or create one) and
 save yourself the trouble of going via various intermediate versions
 (also this would be alot faster).

 -Jonas


 Am Mittwoch, den 28.09.2011, 21:43 +0300 schrieb Jake Jake:
  Hi all,
 
  I am looking at upgrading the JUNOS on our M10i router. Current JUNOS
  platform is 6.3R1.3 . The router has redundant routing Engine  RE-5.0
 with
  512MB DRAM . Also there is no compact flash on board only *ad1s1*. Can
 any
  one suggest on if I can upgrade the router to 11.1R5.4 with the current
  hardware specification .  Please advise on if a direct upgrade can be
 done
  as well from 6.3 to 11.1.
 
  Plus as I understand M10i has 3 DRAM slots. Is there any way of knowing
 the
  combination of RAM used ..i.e 256+256MB or a single 512MB RAM.
 
  Cheers
  ___
  juniper-nsp mailing list juniper-nsp@puck.nether.net
  https://puck.nether.net/mailman/listinfo/juniper-nsp



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i JUNOS Upgrade

2011-09-28 Thread Mark Tinka
On Thursday, September 29, 2011 03:06:42 AM Jeff Wheeler 
wrote:

 If you have DFZ routes you should upgrade the RAM to
 768MB, or alternatively, replace the router or buy more
 modern routing engines.

The new M7i/M10i RE-B-1800 should be dropping around Q1'12, 
along with Junos 11.4R3.

Cheers,

Mark.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] [m10i] PIC-FPC throughput

2011-08-30 Thread sthaug
 Thanks, Peter, Jared, that's exactly what I needed to know. I have noticed
 the oversubscribed 4:1 words in IQ2 description, but could not found
 explicit statement of how much traffic can this PIC handle. Vendors do not
 like to admit such drawbacks in their products :)

I don't necessarily agree. In our conversations with Juniper, they have
been quite clear on the fact that the M7i/M10i IQ2 is 4:1 oversubscribed,
and has only 1 Gig of capacity towards the backplane.

Steinar Haug, Nethelp consulting, sth...@nethelp.no
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] [m10i] PIC-FPC throughput

2011-08-30 Thread Jonas Frey (Probe Networks)
The 3.2 Gbps limitation depends on the CFEB you have.

The CFEB-E bumps this up to full line rate on all ports (4 Gbps per
FPC).

M7i:  8.4Gbps half-duplex with CFEB / 10Gbps half-duplex with CFEB-E
      (this is because of the integrated GE/2FE ports)

M10i: 12.8Gbps half-duplex with the legacy CFEB, 3.2Gbps per FPC
      16Gbps half-duplex with the CFEB-E, 4Gbps per FPC


Anyway you always have only 1 Gbps per PIC towards the backplane
regardless of how many GE ports that PIC actually has.
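
Applied to the original question (4x GE ports on one IQ2 showing up as
ge-0/0/*), a minimal Python sketch; the 1Gbps-per-PIC figure is the one quoted
above and the per-port loads are placeholders:

# Nick's scenario: ge-0/0/* (one 4-port IQ2) going over 1G is a problem even if
# the FPC as a whole stays under the bus limit, because the IQ2 itself only has
# ~1Gbps towards the backplane.
IQ2_BACKPLANE_GBPS = 1.0

def pic_headroom(port_loads_gbps):
    """Remaining backplane capacity on one IQ2, given its per-port loads (Gbit/s)."""
    return IQ2_BACKPLANE_GBPS - sum(port_loads_gbps)

# Placeholder port loads adding up to ~1.1G on the single PIC.
print(f"headroom on the IQ2: {pic_headroom([0.4, 0.3, 0.25, 0.15]):+.2f} Gbit/s")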

Jonas

Am Dienstag, den 30.08.2011, 02:00 +0400 schrieb Nick Kritsky:
 Hi all,
 
 From the Juniper documentation I know that there is a throughput limitation
 of 3.2 Gbps per FPC on m10i routers. Does it mean that there is 800Mbps
 limitation on each PIC inserted in PIC slot on given FPC? Or is it an
 aggregate limitation. To give you the real life example - should I be
 worried if total usage on 4 interfaces of ge-0/0/* wants to go over 1G, if
 the total usage of ge-0/*/* is still below 2G. If that matters, the PIC in
 question is IQ2.
 
 any help is very good.
 thanks
 Nick Kritsky
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp


signature.asc
Description: This is a digitally signed message part
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

[j-nsp] [m10i] PIC-FPC throughput

2011-08-29 Thread Nick Kritsky
Hi all,

From the Juniper documentation I know that there is a throughput limitation
of 3.2 Gbps per FPC on M10i routers. Does that mean there is an 800Mbps
limitation on each PIC inserted in a PIC slot on a given FPC? Or is it an
aggregate limitation? To give a real-life example: should I be worried if the
total usage on the 4 interfaces of ge-0/0/* wants to go over 1G, while the
total usage of ge-0/*/* is still below 2G? If that matters, the PIC in
question is an IQ2.

Any help is much appreciated.
thanks
Nick Kritsky
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] [m10i] PIC-FPC throughput

2011-08-29 Thread Jared Mauch

On Aug 29, 2011, at 6:00 PM, Nick Kritsky wrote:

 Hi all,
 
 From the Juniper documentation I know that there is a throughput limitation
 of 3.2 Gbps per FPC on m10i routers. Does it mean that there is 800Mbps
 limitation on each PIC inserted in PIC slot on given FPC? Or is it an
 aggregate limitation. To give you the real life example - should I be
 worried if total usage on 4 interfaces of ge-0/0/* wants to go over 1G, if
 the total usage of ge-0/*/* is still below 2G. If that matters, the PIC in
 question is IQ2.

These limits are per-FPC.  The PIC is just the PHY to the fabric.

- jared


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] [m10i] PIC-FPC throughput

2011-08-29 Thread Nick Kritsky
Thanks, Peter, Jared, that's exactly what I needed to know. I had noticed
the oversubscribed 4:1 wording in the IQ2 description, but could not find an
explicit statement of how much traffic this PIC can handle. Vendors do not
like to admit such drawbacks in their products :)

best regards
Nick
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] m10i RE-5.0 Memory Upgrade

2011-07-13 Thread Mario Andres Rueda Jaimes
Hi All 


We are looking for some way to upgrade the memory of the RE-5.0 (512MB base
memory) on an M10i router.

Has anybody performed this before? Is this possible?



Thanks !!

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] m10i RE-5.0 Memory Upgrade

2011-07-13 Thread Andrew Hoyos
On Jul 13, 2011, at 9:07 AM, Mario Andres Rueda Jaimes wrote:

 We are looking for some way to Upgrade the Memory of the RE-5.0 (512Mb
 Memory Base) on M10i Router.
 
 Anybody has perform this before ? is this possible ?

Sure is; there are three DIMM slots on your RE-5.0. You probably have two
256MB sticks in there now.

Previous thread on this, related to 3rd party memory option too: 
http://puck.nether.net/pipermail/juniper-nsp/2005-February/003780.html


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] m10i RE-5.0 Memory Upgrade

2011-07-13 Thread MSusiva
Hi,

The RE-400 (CLI name RE-5.0) can be upgraded. It supports up to 768MB of
SDRAM.
Please check the following link, page 4, Table 32:

http://www.juniper.net/techpubs/software/nog/nog-hardware/download/routing-engines.pdf

-- 
Thanks,
Siva
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] M10i, 10.0R3: IQ2E PIC not initialized correctly.

2010-04-23 Thread Alexandre Snarskii

Hi!

During the upgrade of one of our M10i's to 10.0R3, an IQ2E PIC refused
to start at boot time:

chassisd[1277]: %DAEMON-3-CHASSISD_PIC_CMD_TIMEOUT: pic_ready_timer_expired: 
 attempt to bring PIC 0 in FPC 1 online timed out
cfeb CM: %PFE-6: Bouncing PIC 1/0 for reconfiguration 

A reboot did not help (same logs); however, after manually issuing
'request chassis pic ... online' the PIC came up.

Other modules (non-E IQ2 and plain GE) came up correctly both times.

Has anyone seen this (or similar) behaviour? Wondering what the cause may
be - a software bug, a badly seated PIC, something else?

PS: log excerpts: 

initialization timeout:

18:47:17 (FPC Slot 1, PIC Slot 0) cosman_fpc_init: %PFE-6: FC based rewrite is 
OFF for fpc 
18:47:18 (FPC Slot 1, PIC Slot 0) PFEMAN: %PFE-6: server_addr 0x8001 
soft_restart 1 
18:47:18 (FPC Slot 1, PIC Slot 0) Version 10.0R3.10 by builder on 2010-04-16 
07:05:06 UTC 
18:48:18 chassisd[1277]: %DAEMON-3-CHASSISD_PIC_CMD_TIMEOUT: 
pic_ready_timer_expired: attempt to bring PIC 0 in FPC 1 online timed out
18:48:18 cfeb CM: %PFE-6: Bouncing PIC 1/0 for reconfiguration 
18:48:18 chassisd[1277]: %DAEMON-5-CHASSISD_IFDEV_DETACH_PIC: 
ifdev_detach_pic(1/0)

normal startup: 

18:48:51 (FPC Slot 1, PIC Slot 0) cosman_fpc_init: %PFE-6: FC based rewrite is 
OFF for fpc 
18:48:52 (FPC Slot 1, PIC Slot 0) PFEMAN: %PFE-6: server_addr 0x8001 
soft_restart 1 
18:48:52 (FPC Slot 1, PIC Slot 0) Version 10.0R3.10 by builder on 2010-04-16 
07:05:06 UTC 
18:48:52 (FPC Slot 1, PIC Slot 0) SNTP: %PFE-7: Daemon created 
18:48:53 (FPC Slot 1, PIC Slot 0) PFEMAN: %PFE-6: Session manager active 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] m10i Nastiness Friday night

2009-08-17 Thread Clue Store
Hi All,

Last Friday we had some nastiness on one of our M10i's. As I am not a
Juniper expert, I was wondering if someone could decipher the log messages
and determine whether it is possibly a CFEB issue or just a fluke Junos
issue, and whether I should do anything or let it be and see if it does it
again. I have another M10i running 8.5, so I am thinking of just upgrading
this box to match the other, but I'd like to hear what some of you on the
list think.

TIA,
Clue

Hostname: JuniperM10i-HMNDLAMA
Model: m10i
JUNOS Base OS boot [8.0R2.8]
JUNOS Base OS Software Suite [8.0R2.8]
JUNOS Kernel Software Suite [8.0R2.8]
JUNOS Packet Forwarding Engine Support (M7i/M10i) [8.0R2.8]
JUNOS Routing Software Suite [8.0R2.8]
JUNOS Online Documentation [8.0R2.8]


Aug 14 23:38:51  JuniperM10i-HMNDLAMA cfeb mpc106 machine check caused by
error on the Processor Bus
Aug 14 23:38:51  JuniperM10i-HMNDLAMA cfeb mpc106 PCI status register:
0x0020, error detect register 1: 0x00, 2: 0x08
Aug 14 23:38:51  JuniperM10i-HMNDLAMA cfeb mpc106 error ack count = 0
Aug 14 23:38:51  JuniperM10i-HMNDLAMA cfeb mpc106 error address: 0x0f3827f8
Aug 14 23:38:51  JuniperM10i-HMNDLAMA cfeb mpc106 Processor bus error status
register: 0x52
Aug 14 23:38:51  JuniperM10i-HMNDLAMA cfeb transfer type 0b01010, transfer
size 2
Aug 14 23:38:51  JuniperM10i-HMNDLAMA cfeb mpc106 error detection reg2: ECC
multibit
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb ^B
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb last message repeated 6 times
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Context: Interrupt Level (0)
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Registers:
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb R00: 0x0446 R01: 0x00799450
R02: 0x R03: 0x4f3827fc
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb R04: 0x0552 R05: 0x
R06: 0x007994a0 R07: 0x0004
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb R08: 0x0548 R09: 0x0017f48b
R10: 0x0002 R11: 0xb0c7d8ec
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb R12: 0x28002044 R13: 0x02420020
R14: 0xf1ae2100 R15: 0x82600020
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb R16: 0x442104c2 R17: 0x2248000b
R18: 0x0067 R19: 0x0067
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb R20: 0x0067 R21: 0x006ce5a0
R22: 0x007902d0 R23: 0x0067
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb R24: 0x0002 R25: 0x0004
R26: 0x0080bd40 R27: 0x
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb R28: 0x0001 R29: 0x0001
R30: 0x4f38271c R31: 0x4f382714
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb MSR: 0x00089030 CTR: 0x0239
Link:0x002e34c8 SP:  0x00799450
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb CCR: 0x48002028 XER: 0x2000
PC:  0x00460320
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb DSISR: 0x DAR: 0x
K_MSR: 0x0030
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Stack Traceback:
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 01: sp = 0x00799450, pc =
0xc001
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 02: sp = 0x00799468, pc =
0x002e4d74
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 03: sp = 0x00799498, pc =
0x002e35e0
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 04: sp = 0x007994b8, pc =
0x002e3bb0
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 05: sp = 0x007994c0, pc =
0x00058818
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 06: sp = 0x007994d8, pc =
0x0003df34
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 07: sp = 0x00799500, pc =
0x003b4488
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 08: sp = 0x00799530, pc =
0x003b4660
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 09: sp = 0x00799548, pc =
0x003b3ed0
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 10: sp = 0x007995c8, pc =
0x003b3d30
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 11: sp = 0x007995e8, pc =
0x000b9f6c
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 12: sp = 0x00799610, pc =
0x000b8928
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 13: sp = 0x00799628, pc =
0x00448518
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 14: sp = 0x00799678, pc =
0x00442d00
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 15: sp = 0x00799698, pc =
0x0003a500
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 16: sp = 0x007996b0, pc =
0x0003b268
Aug 14 23:38:56  JuniperM10i-HMNDLAMA /kernel: rdp keepalive expired,
connection dropped - src 1:1021 dest 2:15360
Aug 14 23:38:56  JuniperM10i-HMNDLAMA craftd[2999]:  Major alarm set, CFEB
not online, the box is not forwarding
Aug 14 23:38:56  JuniperM10i-HMNDLAMA alarmd[2998]: Alarm set: CFEB
color=RED, class=CHASSIS, reason=CFEB not online, the box is not forwarding
Aug 14 23:38:56  JuniperM10i-HMNDLAMA craftd[2999]: forwarding display
request to chassisd: type = 4, subtype = 43
Aug 14 23:38:56  JuniperM10i-HMNDLAMA chassisd[2997]:
CHASSISD_SHUTDOWN_NOTICE: Shutdown reason: CFEB connection lost
Aug 14 23:38:56  JuniperM10i-HMNDLAMA chassisd[2997]:
CHASSISD_IFDEV_DETACH_FPC: ifdev_detach(0)
Aug 14 23:38:56  JuniperM10i-HMNDLAMA mib2d[3111]: SNMP_TRAP_LINK_DOWN:
ifIndex 77, 

Re: [j-nsp] m10i Nastiness Friday night

2009-08-17 Thread Nilesh Khambal
It looks like the CFEB dumped core and restarted. Please open a JTAC case
and let them figure out what went wrong with the CFEB. Please gather all
logs around the time of the problem. Usually the following logs are a
good start:


- show log messages[.(0-9).gz] (From RE)
- show syslog messages (from CFEB)
- show nvram (from CFEB).
- CFEB coredump file generated under /var/tmp
- Any other surrounding information such as temperature, memory, and CPU
information about the RE and CFEB around the time of the problem.


Given the old version of code you are running on the box, this may be a
known issue fixed in a later release such as 8.5, which you are running on
the other box. Let JTAC analyze that.
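
If it helps when collecting the data, here is a small Python sketch that pulls
the relevant cfeb lines out of a saved copy of the messages log; the patterns
are simply the ones visible in the excerpt above, and the filename is a
placeholder:

# Pull the cfeb crash context out of a saved syslog file before opening the case.
import re

# Patterns taken from the log excerpt above: the mpc106/ECC machine check,
# the stack traceback frames, and the chassisd/craftd fallout.
PATTERNS = re.compile(
    r"mpc106|ECC multibit|Stack Traceback|Frame \d+|CFEB not online|CHASSISD_SHUTDOWN"
)

def extract_cfeb_events(path="messages.txt"):  # placeholder filename
    with open(path, encoding="utf-8", errors="replace") as fh:
        return [line.rstrip() for line in fh if PATTERNS.search(line)]

if __name__ == "__main__":
    for line in extract_cfeb_events():
        print(line)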


Thanks,
Nilesh.

Clue Store wrote:

Hi All,

Last friday we had some nastiness on one of our m10i's. As I am not a
Juniper expert, I was wondering if someone could decipher the log messages
and determine if is possibly a CFEB issue, or just a fluke Junos issue and
whether I should do anything or let it be and see if it does it again. I
have another m10i running 8.5, so I am thinking of just upgrading this box
to the same as my other, but i'd like to hear what some of you on the list
think.

TIA,
Clue

Hostname: JuniperM10i-HMNDLAMA
Model: m10i
JUNOS Base OS boot [8.0R2.8]
JUNOS Base OS Software Suite [8.0R2.8]
JUNOS Kernel Software Suite [8.0R2.8]
JUNOS Packet Forwarding Engine Support (M7i/M10i) [8.0R2.8]
JUNOS Routing Software Suite [8.0R2.8]
JUNOS Online Documentation [8.0R2.8]


Aug 14 23:38:51  JuniperM10i-HMNDLAMA cfeb mpc106 machine check caused by
error on the Processor Bus
Aug 14 23:38:51  JuniperM10i-HMNDLAMA cfeb mpc106 PCI status register:
0x0020, error detect register 1: 0x00, 2: 0x08
Aug 14 23:38:51  JuniperM10i-HMNDLAMA cfeb mpc106 error ack count = 0
Aug 14 23:38:51  JuniperM10i-HMNDLAMA cfeb mpc106 error address: 0x0f3827f8
Aug 14 23:38:51  JuniperM10i-HMNDLAMA cfeb mpc106 Processor bus error status
register: 0x52
Aug 14 23:38:51  JuniperM10i-HMNDLAMA cfeb transfer type 0b01010, transfer
size 2
Aug 14 23:38:51  JuniperM10i-HMNDLAMA cfeb mpc106 error detection reg2: ECC
multibit
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb ^B
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb last message repeated 6 times
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Context: Interrupt Level (0)
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Registers:
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb R00: 0x0446 R01: 0x00799450
R02: 0x R03: 0x4f3827fc
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb R04: 0x0552 R05: 0x
R06: 0x007994a0 R07: 0x0004
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb R08: 0x0548 R09: 0x0017f48b
R10: 0x0002 R11: 0xb0c7d8ec
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb R12: 0x28002044 R13: 0x02420020
R14: 0xf1ae2100 R15: 0x82600020
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb R16: 0x442104c2 R17: 0x2248000b
R18: 0x0067 R19: 0x0067
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb R20: 0x0067 R21: 0x006ce5a0
R22: 0x007902d0 R23: 0x0067
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb R24: 0x0002 R25: 0x0004
R26: 0x0080bd40 R27: 0x
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb R28: 0x0001 R29: 0x0001
R30: 0x4f38271c R31: 0x4f382714
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb MSR: 0x00089030 CTR: 0x0239
Link:0x002e34c8 SP:  0x00799450
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb CCR: 0x48002028 XER: 0x2000
PC:  0x00460320
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb DSISR: 0x DAR: 0x
K_MSR: 0x0030
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Stack Traceback:
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 01: sp = 0x00799450, pc =
0xc001
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 02: sp = 0x00799468, pc =
0x002e4d74
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 03: sp = 0x00799498, pc =
0x002e35e0
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 04: sp = 0x007994b8, pc =
0x002e3bb0
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 05: sp = 0x007994c0, pc =
0x00058818
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 06: sp = 0x007994d8, pc =
0x0003df34
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 07: sp = 0x00799500, pc =
0x003b4488
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 08: sp = 0x00799530, pc =
0x003b4660
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 09: sp = 0x00799548, pc =
0x003b3ed0
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 10: sp = 0x007995c8, pc =
0x003b3d30
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 11: sp = 0x007995e8, pc =
0x000b9f6c
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 12: sp = 0x00799610, pc =
0x000b8928
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 13: sp = 0x00799628, pc =
0x00448518
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 14: sp = 0x00799678, pc =
0x00442d00
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 15: sp = 0x00799698, pc =
0x0003a500
Aug 14 23:38:52  JuniperM10i-HMNDLAMA cfeb Frame 16: sp = 0x007996b0, pc =
0x0003b268
Aug 14 23:38:56  JuniperM10i-HMNDLAMA /kernel: rdp 

Re: [j-nsp] m10i Nastiness Friday night

2009-08-17 Thread Dan Rautio
This message stands out:

 Aug 14 23:38:51  JuniperM10i-HMNDLAMA cfeb mpc106 error detection reg2: ECC 
 multibit



 -Original Message-
 From: juniper-nsp-boun...@puck.nether.net [mailto:juniper-nsp-
 boun...@puck.nether.net] On Behalf Of Nilesh Khambal
 Sent: Monday, August 17, 2009 10:57 AM
 To: Clue Store
 Cc: juniper-nsp@puck.nether.net
 Subject: Re: [j-nsp] m10i Nastiness Friday night
 
 It looks like CFEB dumped core and restarted. Please open a JTAC case
 and let me them figure out what went wrong with CFEB. Please gather all
 logs around the time of the problem. Usually following logs should be a
 good start.
 
 - show log messages[.(0-9).gz] (From RE)
 - show syslog messages (from CFEB)
 - show nvram (from CFEB).
 - CFEB coredump file generated under /var/tmp
 - Any other surrounding information such temperature, memory, CPU
 information about RE and CFEB around the time of the problem.
 
 Given the old version of code you are running on the box, this may be a
 known issue fixed in later release such as 8.5 which you are running on
 the other box. Let JTAC analyze that.
 
 Thanks,
 Nilesh.
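
For reference, the items in that checklist map onto standard Junos / PFE-shell
commands roughly as follows - treat this only as a sketch, since prompts and
exact output on an 8.0-era M10i may differ:

  show log messages | last 250
  show chassis routing-engine
  show chassis environment
  show chassis cfeb
  file list /var/tmp/               <- look for CFEB core files here
  start shell pfe network cfeb0     <- then, on the CFEB vty:
  show syslog messages
  show nvram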
 

Re: [j-nsp] m10i Nastiness Friday night

2009-08-17 Thread Clue Store
Thanks all for the replies. I'll get with JTAC and get it sorted out. As Dan
mentioned, the ECC multibit error kinda scares me, as I do not wish to have
to drive 200+ miles and change out the memory. So let's hope for a Junos fix
:)

Thanks,
Clue

On Mon, Aug 17, 2009 at 12:19 PM, Dan Rautio drau...@juniper.net wrote:

 This message stands out:

  Aug 14 23:38:51  JuniperM10i-HMNDLAMA cfeb mpc106 error detection reg2:
 ECC multibit




Re: [j-nsp] M10i router

2009-05-21 Thread Mark Tinka
On Monday 18 May 2009 11:26:07 pm sth...@nethelp.no wrote:

 Correct. Both M10i and M20 can handle STM-16...

But only with the non-enhanced CFEB.

The new enhanced CFEB doesn't support the STM-16 PIC.

Cheers,

Mark.



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

[j-nsp] M10i router

2009-05-18 Thread simon teh
Hi all,

I have some questions about the M10i router. After reading some old threads in
this mailing list archive, I noticed that we can re-use M20 PICs on the M10i
(please correct me if I am wrong).

However, my concern is: can we use a 10 GigE PIC on the M10i or M20 router? I have
checked the Table of PICs Supported in the M10i & M20 from www.juniper.net, and it
is not listed.
Does that mean it is not supported?

Thanks

Best regards,
Simon
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i router

2009-05-18 Thread sthaug
 However my concern is can we use 10 Gi PIC on M10i or M20 router? I have
 check the Table of PICs Supported in the M10i & M20 from www.juniper.net, it
 is not listed.
 Does that mean it is not supported?

Correct. Both M10i and M20 can handle STM-16 (in the form of a full width
FPC with integrated PIC), but not 10Gig.

If you need 10Gig Ethernet, you might want to look at the MX series. If
you need SDH STM-64 interface you need to look at M120, M320 and T series.

Steinar Haug, Nethelp consulting, sth...@nethelp.no
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i router

2009-05-18 Thread Withers, Ronald H.
I don't think you can reuse M20 PICs in an M10i without changing the
bottom plate.  I think it's the other way around: you can use an M10i PIC in an M20.
The M10i PICs have a different bottom plate that provides the catch
on the card.  The M20 cards do not have this, which prevents them from
going into an M10i.


--
Ron Withers
IP Engineer

-Original Message-
From: juniper-nsp-boun...@puck.nether.net
[mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of simon teh
Sent: Monday, May 18, 2009 10:55 AM
To: juniper-nsp@puck.nether.net
Subject: [j-nsp] M10i router

Hi all,

I have some question about M10i router. After reading some old thread in
this mailing list archive, I noticed that we can re-use M20 PIC on M10i.
(please correct me if I am wrong)

However my concern is can we use 10 Gi PIC on M10i or M20 router? I have
check the Table of PICs Supported in the M10i & M20 from
www.juniper.net, it
is not listed.
Does that mean it is not supported?

Thanks

Best regards,
Simon
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i router

2009-05-18 Thread Brandon Bennett
An M20 PIC will go into an M10i just fine, but as Ron said the bottom plate
is different.   This means unsupported.

To get the M20 PIC into an M10i you will need to pull out the adjacent PIC and
have someone with a flashlight shining into the slot to line up the pins
properly so as not to bend them.   To remove the PIC you will have to remove
the adjacent PIC and pull from behind (there is no ejection mechanism).

HTH,

Brandon Bennett
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i router [solved]

2009-05-18 Thread simon teh
Hi all,

Noted. Thanks for the help from all members.
I appreciate it very much.

Best regards,
Simon


On Mon, May 18, 2009 at 11:26 PM, sth...@nethelp.no wrote:

  However my concern is can we use 10 Gi PIC on M10i or M20 router? I have
  check the Table of PICs Supported in the M10i & M20 from www.juniper.net,
 it
  is not listed.
  Does that mean it is not supported?

 Correct. Both M10i and M20 can handle STM-16 (in the form of a full width
 FPC with integrated PIC), but not 10Gig.

 If you need 10Gig Ethernet, you might want to look at the MX series. If
 you need SDH STM-64 interface you need to look at M120, M320 and T series.

 Steinar Haug, Nethelp consulting, sth...@nethelp.no

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i RE Upgrade

2009-04-01 Thread alain.briant
Hi Gareth

BB is Base Bundle (the card that comes in the chassis).
R is Redundancy (the one you buy if you want redundancy, so it is optional).
S is Spare (the one you buy as a replacement, so it is not normally installed in
a router).

You'd rather change supplier ;-)

Kind regards
Alain


-Original Message-
From: juniper-nsp-boun...@puck.nether.net 
[mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of Gareth Campling
Sent: Wednesday, 1 April 2009 09:52
To: juniper-nsp@puck.nether.net
Subject: [j-nsp] M10i RE Upgrade

Hi

 

I am looking at upgrading the REs in 2 of our M10i's to the RE-850-1536, but I am a bit 
confused by our supplier's price list.

 

Can anyone tell me the difference between these.. except the price ?

 

RE-850-1536-BB 

RE-850-1536-R

RE-850-1536-S

 

Our supplier does not know...

 

Thanks in advance.

..
Gareth 

 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net 
https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i RE Upgrade

2009-04-01 Thread Mark Tinka
On Wednesday 01 April 2009 03:51:37 pm Gareth Campling 
wrote:

Just to add onto what Dan has mentioned:

 RE-850-1536-BB

This ships with the chassis - you need at least one to 
operate the thing :-). But it's not necessarily free, so I 
find the term Base Bundle to be a bit of a misnomer.

The M7i and M10i still ship with the RE-400 (768MB DRAM) as 
part of the package (true base bundle?). Replacing that with 
an RE-850 will cost you, and you won't get the RE-400, but 
who cares.

What's interesting is that the -BB and -R versions of the 
RE-850 have a different price each, and a huge difference at 
that. I've always wondered whether ordering 2x -BB instead 
of 1x -BB and 1x -R will yield a better quotation :-)?

Cheers,

Mark.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] M10i RE Upgrade

2009-04-01 Thread Powers, Kenny

Your Juniper partner does not sound up to speed. If you are not buying an M10i, 
and just the engine right now, the only part you can buy is the 
RE-850-1536-WW-S (the WW stands for JUNOS Worldwide).  They do not sell the 
RE-850-1536-S anymore.  The -BB is only available when you buy an M10i with it 
and the -R is their code for when you buy an M10i with 2 in it.  List is $20k 
on the RE-850-1536-WW-S.

Kenny


Kenny Powers
Direct: 678-969-3396  Fax: 678-969-3397  Mobile: 678-591-3022
* Enterprise Storage, Servers, Networking Equipment 
* Data Center Consolidations / Relocations  
* Asset Remarketing / Disposition Services 



-Original Message-
From: juniper-nsp-boun...@puck.nether.net 
[mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of Gareth Campling
Sent: Wednesday, April 01, 2009 3:52 AM
To: juniper-nsp@puck.nether.net
Subject: [j-nsp] M10i RE Upgrade

Hi

 

I am looking at upgrading 2 of our M10i's RE's to the RE-850-1536 but a
bit confused by our suppliers price list.

 

Can anyone tell me the difference between these.. except the price ?

 

RE-850-1536-BB 

RE-850-1536-R

RE-850-1536-S

 

Our suppliers does not know...

 

Thanks in advance.

..
Gareth 

 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i - %KERN-1-RT_PFE: RT msg op 1 (PREFIX ADD) failed, err 6 (No Memory) / RT msg op 3 (PREFIX CHANGE) failed, err 6 (No Memory)

2009-02-05 Thread listensamm...@gmx.de

Pajlatek schrieb:

To monitor your usage of SRAM use this command:
#request pfe execute command show jtree 0 memory target cfeb0 (or cfeb1)
(..)
GOT: Memory Statistics:
GOT: 8388608 bytes total (2 banks)
GOT: 5006848 bytes used



Many thanks for this hint...
Unfortunately the command doesn't work with my JunOS release.
What JunOS are you using?

With my JunOS 7.5R2.8, I had to log in to cfeb0 before I could execute the
show jtree 0 memory.

Do you know an SNMP OID for this? Then I could set up a monitoring job.

r...@myrouter% vty ssb

CSBR platform (266Mhz PPC 603e processor, 128MB memory, 512KB flash)

CSBR0(MYROUTER vty)# sh jtree 0 memory
Memory Statistics:
8388608 bytes total (2 banks)
4226168 bytes used
4162440 bytes free
   8128 pages total
   4109 pages used
   4019 pages free
 31 max freelist size

Free Blocks:
  Size(b)    Total(b)    Free   TFree    Alloc
  -------    --------   -----   -----   ------
        8     2942040    1114       0   366641
       16     1032144     344       0    64165
       24         168       1       0        6
       32           0       0       0        0
       40           0       0       0        0
       48           0       0       0        0
       56           0       0       0        0
       64           0       0       0        0
       72           0       0       0        0
       80           0       0       0        0
       88           0       0       0        0
       96           0       0       0        0
      104           0       0       0        0
    Total     3974352

Context: 0x879d9c

Regards,
Alex

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i - %KERN-1-RT_PFE: RT msg op 1 (PREFIX ADD) failed, err 6 (No Memory) / RT msg op 3 (PREFIX CHANGE) failed, err 6 (No Memory)

2009-02-02 Thread Mark Tinka
On Monday 02 February 2009 08:44:23 am Pajlatek wrote:

 This is only 8MB on M7i/M10i, and there is no upgrade.

Not unless you're willing to part with some $$ for the new 
FEB-M10i-M7i-E (enhanced M7i/M10i CFEB with 32MB of RLDRAM).

Still waiting for a price and ship date from our local 
Juniper team.

Cheers,

Mark.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] M10i - %KERN-1-RT_PFE: RT msg op 1 (PREFIX ADD) failed, err 6 (No Memory) / RT msg op 3 (PREFIX CHANGE) failed, err 6 (No Memory)

2009-01-25 Thread Pekka Savola

On Sat, 24 Jan 2009, Nilesh Khambal wrote:

I doubt that its a memory leak unless some new feature that could cause
memory leak (due to a bug) or new configuration was added recently that
could suddenly increase the number of routes on the router.


Another candidate is enabling uRPF.  It consumes this memory linearly with
the FIB size (regardless of how it's used and on which interfaces).
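
For anyone following along: uRPF is a per-interface, per-family knob in Junos,
so it is also easy to back out if the jtree cost turns out to be the problem.
A minimal sketch (the interface name is just a placeholder):

  set interfaces ge-0/1/0 unit 0 family inet rpf-check
  delete interfaces ge-0/1/0 unit 0 family inet rpf-check    <- to reclaim the memory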


--
Pekka Savola                  You each name yourselves king, yet the
Netcore Oy                    kingdom bleeds.
Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i - %KERN-1-RT_PFE: RT msg op 1 (PREFIX ADD) failed, err 6 (No Memory) / RT msg op 3 (PREFIX CHANGE) failed, err 6 (No Memory)

2009-01-24 Thread listensamm...@gmx.de

Bjørn Tore Paulen schrieb:

Might there be some issue with permissions here? If you login as root or
similar you should be able to start shell.
  
Thanks for your replies. I fixed the problem by rebooting the router 
yesterday.

After that, I could log in with start shell pfe network cfeb0 again.
The M10i was running for about 2.5 years without problems. If the problem
occurs again in the near future I will consider a RAM upgrade on the CFEB.


Regards,
Alex
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i - %KERN-1-RT_PFE: RT msg op 1 (PREFIX ADD) failed, err 6 (No Memory) / RT msg op 3 (PREFIX CHANGE) failed, err 6 (No Memory)

2009-01-24 Thread Derick Winkworth
Could be a memory leak...



listensamm...@gmx.de wrote:
 Bjørn Tore Paulen schrieb:
 Might there be some issue with permissions here? If you login as root or
 similar you should be able to start shell.
   
 Thanks for your replies. I fixed the problem by rebooting the router
 yesterday.
 After that, i could login with start shell pfe network cfeb0 again.
 The M10i was running about 2,5 years without problems. If the problem
 occurs again in the near future i will consider RAM-Upgrade on CFEB.

 Regards,
 Alex
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp
 



   
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i - %KERN-1-RT_PFE: RT msg op 1 (PREFIX ADD) failed, err 6 (No Memory) / RT msg op 3 (PREFIX CHANGE) failed, err 6 (No Memory)

2009-01-24 Thread Nilesh Khambal
Hi Derick,

I doubt that it's a memory leak, unless some new feature that could cause a
memory leak (due to a bug) or new configuration was added recently that
could suddenly increase the number of routes on the router. It also cannot
be a memory leak if the router was running for 2.5 years without any reboot.
Sometimes this problem may happen on routers that have been running for years
without any reboot. The DRAM on the CFEB may get too fragmented (just like PC
RAM). When any route operation such as an ADD is done by the router, it may
fail under such conditions if the router cannot allocate a contiguous
(unfragmented) block of memory large enough to hold the routing data from the
operation being performed. This can be completely normal, and a simple reboot
will fix the issue. The reboot is needed since the router does not have a defrag
function like a PC :). If you think that is not the reason, please work with
JTAC to identify the root cause of the problem in the future.

Thanks,
Nilesh.   


On 1/24/09 8:34 PM, Derick Winkworth dwinkwo...@att.net wrote:

 Could be a memory leak...
 
 
 
 listensamm...@gmx.de wrote:
  Bjørn Tore Paulen schrieb:
  Might there be some issue with permissions here? If you login as root or
  similar you should be able to start shell.
   
  Thanks for your replies. I fixed the problem by rebooting the router
  yesterday.
  After that, i could login with start shell pfe network cfeb0 again.
  The M10i was running about 2,5 years without problems. If the problem
  occurs again in the near future i will consider RAM-Upgrade on CFEB.
 
  Regards,
  Alex
  ___
  juniper-nsp mailing list juniper-nsp@puck.nether.net
  https://puck.nether.net/mailman/listinfo/juniper-nsp
  
 
 
 
   
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp
 


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] M10i - %KERN-1-RT_PFE: RT msg op 1 (PREFIX ADD) failed, err 6 (No Memory) / RT msg op 3 (PREFIX CHANGE) failed, err 6 (No Memory)

2009-01-19 Thread listensamm...@gmx.de

Hi List,

I have a problem on one of our M10i routers.
The system log continuously shows the following errors:

Jan 19 15:59:03 MYROUTER /kernel: %KERN-1-RT_PFE: RT msg op 3 (PREFIX 
CHANGE) failed, err 5 (Invalid)
Jan 19 15:59:03 MYROUTER /kernel: %KERN-1-RT_PFE: RT msg op 1 (PREFIX 
ADD) failed, err 6 (No Memory)
Jan 19 15:59:07 MYROUTER /kernel: %KERN-1-RT_PFE: RT msg op 3 (PREFIX 
CHANGE) failed, err 6 (No Memory)
Jan 19 15:59:15 MYROUTER /kernel: %KERN-1-RT_PFE: RT msg op 1 (PREFIX 
ADD) failed, err 6 (No Memory)
Jan 19 15:59:23 MYROUTER /kernel: %KERN-1-RT_PFE: RT msg op 3 (PREFIX 
CHANGE) failed, err 5 (Invalid)
Jan 19 15:59:27 MYROUTER /kernel: %KERN-1-RT_PFE: RT msg op 1 (PREFIX 
ADD) failed, err 6 (No Memory)
Jan 19 15:59:27 MYROUTER /kernel: %KERN-1-RT_PFE: RT msg op 3 (PREFIX 
CHANGE) failed, err 5 (Invalid)
Jan 19 15:59:30 MYROUTER /kernel: %KERN-1-RT_PFE: RT msg op 1 (PREFIX 
ADD) failed, err 6 (No Memory)


But there is no traffic impact.
I tried further investigation with some commands I found in older mailings,
but without success...

u...@myrouter start shell pfe network cfeb0
vty: connect: Connection refused

u...@myrouter start shell   
% su -

Password:
r...@myrouter% vty ssb
vty: connect: Connection refused
r...@myrouter%

I could imagine that a reload would clear the problem, but I would like
to avoid it...

Does somebody know this issue?

Thanks
Regards,
alex


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i - %KERN-1-RT_PFE: RT msg op 1 (PREFIX ADD) failed, err 6 (No Memory) / RT msg op 3 (PREFIX CHANGE) failed, err 6 (No Memory)

2009-01-19 Thread sthaug
 i have a problem on one of our M10i.
 System log continously shows following errors:
 
 Jan 19 15:59:03 MYROUTER /kernel: %KERN-1-RT_PFE: RT msg op 3 (PREFIX 
 CHANGE) failed, err 5 (Invalid)
 Jan 19 15:59:03 MYROUTER /kernel: %KERN-1-RT_PFE: RT msg op 1 (PREFIX 
 ADD) failed, err 6 (No Memory)
 Jan 19 15:59:07 MYROUTER /kernel: %KERN-1-RT_PFE: RT msg op 3 (PREFIX 
 CHANGE) failed, err 6 (No Memory)
 Jan 19 15:59:15 MYROUTER /kernel: %KERN-1-RT_PFE: RT msg op 1 (PREFIX 
 ADD) failed, err 6 (No Memory)
 Jan 19 15:59:23 MYROUTER /kernel: %KERN-1-RT_PFE: RT msg op 3 (PREFIX 
 CHANGE) failed, err 5 (Invalid)
 Jan 19 15:59:27 MYROUTER /kernel: %KERN-1-RT_PFE: RT msg op 1 (PREFIX 
 ADD) failed, err 6 (No Memory)
 Jan 19 15:59:27 MYROUTER /kernel: %KERN-1-RT_PFE: RT msg op 3 (PREFIX 
 CHANGE) failed, err 5 (Invalid)
 Jan 19 15:59:30 MYROUTER /kernel: %KERN-1-RT_PFE: RT msg op 1 (PREFIX 
 ADD) failed, err 6 (No Memory)
 
 But there is no traffic impact
 I tried further investigation with some commands i found in older mailings.
 But without success...
 
 u...@myrouter start shell pfe network cfeb0
 vty: connect: Connection refused

This probably doesn't work precisely because there is insufficient CFEB
memory. We have seen exactly this happen. Sooner or later it probably
*will* impact traffic.

 I could imagine that a reload will clear the problem, but i would like 
 to avoid it...

A reload is probably the only fix. And you *really* want to monitor your
CFEB memory utilization (show chassis cfeb).

We are in the process of upgrading all our M7i/M10i CFEBs to 256 MB.
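
A minimal way to do the monitoring suggested above, using only commands that
already appear elsewhere in this archive (repeat for cfeb1 on a dual-CFEB M10i):

  show chassis cfeb                  (from the RE CLI)
  start shell pfe network cfeb0
  show jtree 0 memory                (on the CFEB vty)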

Steinar Haug, Nethelp consulting, sth...@nethelp.no
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i - %KERN-1-RT_PFE: RT msg op 1 (PREFIX ADD) failed, err 6 (No Memory) / RT msg op 3 (PREFIX CHANGE) failed, err 6 (No Memory)

2009-01-19 Thread Richmond, Jeff
Agreed, we had this happen on older M20s as well. Depending on your routing
table, a quick band-aid fix might be to clean up more-specific prefixes if
you can, so that your forwarding table doesn't have as many entries. For
example, I have had cases where we had a bunch of smaller internal routes on
RR clients that really didn't need to be there as long as the RR server routers
had the more specifics. More memory is certainly the long-term solution, though.

Regards,
-Jeff
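
As a sketch of what the clean-up Jeff describes can look like in Junos - the
prefix, policy name and BGP group name below are made up for illustration, and
this only makes sense where the RR servers really do keep the more specifics:

  set policy-options policy-statement drop-more-specifics term long from route-filter 10.0.0.0/8 prefix-length-range /25-/32
  set policy-options policy-statement drop-more-specifics term long then reject
  set policy-options policy-statement drop-more-specifics term rest then accept
  set protocols bgp group internal-clients import drop-more-specifics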

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i - %KERN-1-RT_PFE: RT msg op 1 (PREFIX ADD) failed, err 6 (No Memory) / RT msg op 3 (PREFIX CHANGE) failed, err 6 (No Memory)

2009-01-19 Thread listensamm...@gmx.de


did you try 'start shell pfe network cfeb' (without zero), or similar 
from shell ?

Hi,

I have to add the 0, because we have an M10i, which can manage 2 CFEBs:

u...@myrouter start shell pfe network cfeb
 ^
'cfeb' is ambiguous.
Possible completions:
 cfeb0Connect to Compact Forwarding Engine Board 0
 cfeb1Connect to Compact Forwarding Engine Board 1

u...@myrouter start shell pfe network cfeb0 
vty: connect: Connection refused


u...@myrouter start shell pfe network cfeb1   
vty: connect: No route to host


u...@myrouter

Connection to vty doesn't work at all.
I have to check it via direct console access.

Regards,
Alex Detzen

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i Junos 8.0

2008-10-14 Thread Felix Schueren
Eric Van Tol wrote:
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:juniper-nsp-
 [EMAIL PROTECTED] On Behalf Of W. Kevin Hunt
 Sent: Monday, October 13, 2008 3:37 PM
 To: juniper-nsp@puck.nether.net
 Subject: [j-nsp] M10i Junos 8.0

 Is there a default rate limit on packets destined to the RE ?
 I've got terribly sluggish CLI on one of my boxes, but nothing jumps out
 as the possible cause.
 No ddos against the router's interfaces, netflow sampling has been
 turned off, etc...
 Load and cpu usage are both very low as checked by snmp and the CLI.

 WKH
 
 It's a long shot, but is there a chance that your logs are showing something 
 like the following?
 
 /kernel: chassisd pid 2922 syscall 54 ran for 1251.115 ms
 
 We had this problem in 8.4, I believe.  The symptom was that the CLI was very 
 sluggish every time this entry was logged.
 
We've had trouble with an M10i due to memory problems - the default M10i
came with 256MB of RAM, and the box was very sluggish due to swapping,
which also logged entries like RPD_SCHEDULER_SLIP etc.

-felix
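
Two standard commands for confirming that kind of RE memory pressure (output
columns vary by release, so treat this only as a pointer):

  show chassis routing-engine          (installed DRAM, memory utilization, load)
  show system processes extensive      (per-process memory, e.g. rpd, plus swap)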




-- 
Felix Schueren, Head of NOC

Host Europe GmbH - http://www.hosteurope.de
Welserstraße 14 - D-51149 Köln - Germany
Telefon: (0800) 4 67 83 87 - Telefax: (01805) 66 32 33
HRB 28495 Amtsgericht Köln - UST ID DE187370678
Geschäftsführer: Uwe Braun - Patrick Pulvermüller - Stewart Porter
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

[j-nsp] M10i Junos 8.0

2008-10-13 Thread W. Kevin Hunt

Is there a default rate limit on packets destined to the RE?
I've got a terribly sluggish CLI on one of my boxes, but nothing jumps out
as the possible cause.
No DDoS against the router's interfaces, netflow sampling has been
turned off, etc...

Load and cpu usage are both very low as checked by snmp and the CLI.

WKH
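
The question above never really gets answered in this thread. For context only:
the usual way to make RE-bound rate limiting explicit on M-series is a policer
inside a firewall filter applied to lo0. A minimal sketch - names, rates and
the single ICMP term are placeholders, not a recommendation:

  set firewall policer re-police-1m if-exceeding bandwidth-limit 1m
  set firewall policer re-police-1m if-exceeding burst-size-limit 15k
  set firewall policer re-police-1m then discard
  set firewall family inet filter protect-re term icmp from protocol icmp
  set firewall family inet filter protect-re term icmp then policer re-police-1m
  set firewall family inet filter protect-re term icmp then accept
  set firewall family inet filter protect-re term everything-else then accept
  set interfaces lo0 unit 0 family inet filter input protect-re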

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] M10i performance

2008-08-01 Thread Sven Juergensen (KielNET)


Dear list,

doing some mind games with deploying additional BGP routers that need to
sport the following features:
- full BGP table
- 6 SFP GE wirespeed slots (no oversubscription)

That's the basic idea. Is a fully redundant setup of an M10i using
- RE-850-1536-R
able to do this, or is it like walking on the edge when it comes to the
BGP capacity?

Also, will
(2) PE-4GE-TYPE1-SFP-IQ2
feature full duplex wirespeed ports, or are they oversubscribed? Considering
the spec sheet, the M10i is able to deliver 12.8 Gbps - are there any
backplane considerations, or is this a shared bandwidth between all eight
slots?

Is using
(6) PE-1GE-SFP
an alternative that actually provides every port with wirespeed?

Thanks for any clues and best regards,

With kind regards

i. A. Sven Juergensen

--
Fachbereich
Informationstechnologie

KielNET GmbH
Gesellschaft fuer Kommunikation
Preusserstr. 1-9, 24105 Kiel

Telefon : 0431 / 2219-053
Telefax : 0431 / 2219-005
E-Mail  : [EMAIL PROTECTED]
Internet: http://www.kielnet.de

Geschaeftsfuehrer Eberhard Schmidt
HRB 4499 (Amtsgericht Kiel)
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i performance

2008-08-01 Thread sthaug
 doing some mindgames with deploying additional BGP routers that need to
 sport the following features:
 - full BGP table
 - 6 SFP GE wirespeed slots (no oversubscription)
 
 That's the basic idea. Is a fully redundant setup of an M10i using
 - RE-850-1536-R
 able to do this, or is it like walking on the edge when it comes to the
 BGP capacity?

You should be just fine with RE-850-1536-R. The M10i has 6.4 Gbps full
duplex forwarding capacity, so six GigE ports should be okay.

 Also, will
 (2) PE-4GE-TYPE1-SFP-IQ2
 feature full duplex wirespeed ports or
 are they oversubscribed? Considering
 the specsheet, the M10i is able to
 deliver 12.8 Gbps - are there any back-
 plane considerations or is this a shared
 bandwidth between all eight slots?

PE-4GE-TYPE1-SFP-IQ2 is overbooked. It has *one* full duplex GigE link
to the backplane. For six wirespeed ports you will need PE-1GE-SFP,
distributed over the two CFEBs.
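
Taking the numbers quoted in this thread at face value: 6 x PE-1GE-SFP is
6 Gbps per direction, which fits inside the M10i's 6.4 Gbps full-duplex
forwarding capacity (presumably where the 12.8 Gbps aggregate figure on the
spec sheet comes from), while 2 x PE-4GE-TYPE1-SFP-IQ2 puts 8 GigE ports
behind 2 x 1 Gbps links to the forwarding engine, i.e. 4:1 oversubscription
per PIC.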

 Is using
 (6) PE-1GE-SFP
 an alternative that actually provides
 every port with wirespeed?

Yes. Note that list price for 6 x PE-1GE-SFP is higher than *one* 20
port DPCE-R-Q-20GE-SFP GigE card for the MX series. I would strongly
urge you to consider the MX series here.

Steinar Haug, Nethelp consulting, [EMAIL PROTECTED]
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i performance

2008-08-01 Thread sthaug
 Yes. Note that list price for 6 x PE-1GE-SFP is higher than *one* 20
 port DPCE-R-Q-20GE-SFP GigE card for the MX series. I would strongly
 urge you to consider the MX series here.

Looking at the numbers a bit more closely: An MX240 with everything
redundant except the DPCE-R-Q-20GE-SFP card is only about 15.5% more
expensive than the redundant M10i - and has much higher capacity.

On the other hand, if you also need a redundant DPC, the MX240 is
going to be significantly more expensive.

Steinar Haug, Nethelp consulting, [EMAIL PROTECTED]
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M10i can't commit configuration

2007-08-06 Thread Eric Van Tol
Never mind - problem resolved.  Seems to be a possible bug or simply an
undocumented syntax change between 8.2R1.x and 8.2R2.x.

Thanks to those who have responded so far.

-evt
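
For anyone hitting the same check-out failure, a few standard, release-independent
ways to narrow down which stanza rpd is objecting to (nothing here is specific
to 8.2; "protocols bgp" is just an example of a stanza to bisect):

  show | compare rollback 1      (in configuration mode: what actually changed)
  commit check                   (re-validate without activating the config)
  deactivate protocols bgp       (then run commit check again to bisect)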

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Eric Van Tol
 Sent: Monday, August 06, 2007 10:04 AM
 To: juniper-nsp@puck.nether.net
 Subject: [j-nsp] M10i can't commit configuration
 
 Hi all,
 I'm getting the following error when trying to commit a 
 configuration on
 a newly installed M10i:
 
 cer1.bltmmdch-re0# commit and-quit
 re0:
 error: Check-out failed for Routing protocols process (/usr/sbin/rpd)
 without details
 error: configuration check-out failed
 
 We've restarted RPD and tried a 'commit full', but nothing works.  I'd
 prefer not to have to restart the entire router, but will if 
 necessary.
 Any idea what this is all about?
 
 Thanks,
 evt
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp
 
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp