Re: [j-nsp] SRX SNMP trending, current sessions & connection rate?

2010-08-29 Thread Ben Dale
Hi Matthew,

On 30/08/2010, at 2:04 PM, matthew zeier wrote:

> Having trouble finding the OIDs to trend concurrent sessions and new session 
> setup rate (which I suppose could be pulled from the same OID).
> 
> jtac pointed me at 
> 
> http://www.juniper.net/techpubs/en_US/junos10.1/information-products/topic-collections/config-guide-network-mgm/mib-jnx-js-policy.txt
>  
> 
> but I must be glossing over it.  
> 
> Looking for an OID or a pre-canned cacti template.

The MIB you have been provided will only show the session count for a specific 
policy at a time, so it probably isn't what you're after.

If you run "show snmp mib walk jnxJsSPUMonitoringObjectsTable" on your SRX, 
you'll see the correct object you need to monitor for overall session count is 
jnxJsSPUMonitoringCurrentFlowSession.0
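
As a rough illustration (the prompt and the returned value here are placeholders, 
not from the original mail), you can read that object back on-box before pointing 
your poller at it:

    user@srx> show snmp mib get jnxJsSPUMonitoringCurrentFlowSession.0
    jnxJsSPUMonitoringCurrentFlowSession.0 = 12345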

Cheers,

Ben


Re: [j-nsp] SRX SNMP trending, current sessions & connection rate?

2010-08-29 Thread Tim Eberhard
Concurrent sessions are at 1.3.6.1.4.1.2636.3.39.1.12.1.2.0

As far as I know there is no OID for session setup rate or ramp rate.
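
For graphing, a sketch of polling that OID from a standard net-snmp host 
(community string and hostname below are placeholders):

    $ snmpget -v2c -c public srx.example.net .1.3.6.1.4.1.2636.3.39.1.12.1.2.0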

Hope this helps,
-Tim Eberhard

On Sun, Aug 29, 2010 at 11:04 PM, matthew zeier  wrote:

> Having trouble finding the OIDs to trend concurrent sessions and new
> session setup rate (which I suppose could be pulled from the same OID).
>
> jtac pointed me at
>
>
> http://www.juniper.net/techpubs/en_US/junos10.1/information-products/topic-collections/config-guide-network-mgm/mib-jnx-js-policy.txt
>
> but I must be glossing over it.
>
> Looking for an OID or a pre-canned cacti template.


[j-nsp] SRX SNMP trending, current sessions & connection rate?

2010-08-29 Thread matthew zeier
Having trouble finding the OIDs to trend concurrent sessions and new session 
setup rate (which I suppose could be pulled from the same OID).

jtac pointed me at 

http://www.juniper.net/techpubs/en_US/junos10.1/information-products/topic-collections/config-guide-network-mgm/mib-jnx-js-policy.txt
 

but I must be glossing over it.  

Looking for an OID or a pre-canned cacti template.


Re: [j-nsp] radius authentication

2010-08-29 Thread Patrick Okui



On 30 Aug, 2010, at 1:27 AM, snort bsd wrote:

> So I have a user named "admin" on the Juniper routers. Do all other
> users registered in the RADIUS server then have to be mapped to this
> local user "admin"?


In short, if you do not map the user to any local user ("admin" in  
your case) JUNOS assumes you've mapped to the user "remote".


> Does this local user "admin" have to be registered with the RADIUS
> server too?


Not really.

> Is the following configuration (in the RADIUS server's "users" file)
> good enough?


> test   Auth-Type := Local
>        Cleartext-Password := "1234567890",
>        Juniper-Local-User-Name = "admin"


This looks fine. Have you tried it?
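
For reference, a minimal sketch of the Junos side of this (the address, secret 
and login classes below are placeholders, not from the thread): the locally 
defined template user only needs to exist with the privileges you want 
RADIUS-authenticated users to inherit, and "remote" catches anyone the server 
does not explicitly map.

    set system authentication-order [ radius password ]
    set system radius-server 192.0.2.10 secret "shared-secret"
    set system login user admin class super-user
    set system login user remote class read-only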

--
patrick



Re: [j-nsp] New 16port 10G Card and new MPC with 4x10G MIC Cards - coexistance of old DPCs and new Cards in same chassis -- looking for experience feedback

2010-08-29 Thread Richard A Steenbergen
On Sun, Aug 29, 2010 at 12:00:01PM -0700, Derick Winkworth wrote:
> so the possibility does exist that with a combination of newer fabric 
> and newer line card (a line card with better MQ memory bandwidth), 
> that MX might be able to push more traffic per slot...

Sure, the chassis backplane is electrically capable of quite a bit, so 
if you keep upgrading the fabric and the cards you should be able to 
keep increasing bandwidth without forklifting the chassis for a long 
time.

BTW, one more point of clarification on the MQ bandwidth limit. The 70G 
limit is actually for the bandwidth crossing the PFE in any direction, 
so the previously mentioned "35G fabric 35G wan" example is actually 
based on the assumption of bidirectional traffic. To calculate the MQ 
usage you only want to count the packet ONCE per PFE crossing (so for 
example, you can just count every ingress packet), but you need to 
include traffic coming from the fabric interfaces too.

So for example:

* A single 10Gbps stream coming in one port on the PFE counts as 10Gbps 
of MQ usage, regardless of whether it is destined for the fabric or for 
a local port.

* A bidirectional 10Gbps stream between two ports on the same PFE counts 
as 20Gbps of MQ usage, since you have 2 ports each receiving 10Gbps.

* 30Gbps of traffic coming in over the fabric (ignoring any fabric 
limitations for the moment, as they are a separate calculation) and 
going out the WAN interfaces counts as 30G of MQ usage, which means you 
still have another 40G available to receive packets from the WAN 
interfaces and locally or fabric switch them.

This is quite a bit more flexible than just thinking about it as "35G 
full duplex", since you're free to use the 70G in any direction you see 
fit.
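
As an illustrative tally of that accounting (numbers made up for the example): 
25G arriving on local WAN ports plus 30G arriving from the fabric counts as 55G 
of the ~70G MQ budget, leaving roughly 15G of headroom, regardless of where any 
of that traffic is ultimately sent.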

-- 
Richard A Steenbergen   http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)


[j-nsp] radius authentication

2010-08-29 Thread snort bsd
Hi, all:

I am trying to understand the RADIUS authentication process supported by 
Juniper routers. From the JNCIS book:


[quote]A user supplies a name of Scott to the remote authentication server, 
which accepts the request. However, Scott is not a current username in the 
local 
password database. In this situation, the router maps Scott to the default 
username of "remote".[/quote]

[quote]In short, the remote server may authenticate a user with the name of 
Sally but inform the router that Sally should be mapped to the local name of 
Beth for purposes of assigning rights and privileges on the router.[/quote]

So I have a user named "admin" on the Juniper routers. Do all other users 
registered in the RADIUS server then have to be mapped to this local user 
"admin"?

Does this local user "admin" have to be registered with the RADIUS server too?

Is the following configuration (in the RADIUS server's "users" file) good 
enough?

test   Auth-Type := Local
       Cleartext-Password := "1234567890",
       Juniper-Local-User-Name = "admin"
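
For comparison, the more common layout for that entry in a FreeRADIUS "users" 
file (a sketch only, reusing the values above; whether Auth-Type := Local is 
still needed depends on your FreeRADIUS version) puts the check items on the 
first line and the reply attributes on the indented continuation lines:

    test    Cleartext-Password := "1234567890"
            Juniper-Local-User-Name = "admin"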


  



Re: [j-nsp] New 16port 10G Card and new MPC with 4x10G MIC Cards - coexistance of old DPCs and new Cards in same chassis -- looking for experience feedback

2010-08-29 Thread Derick Winkworth
So the possibility does exist that with a combination of newer fabric and a newer 
line card (a line card with better MQ memory bandwidth), the MX might be able 
to push more traffic per slot...







From: Richard A Steenbergen 
To: Derick Winkworth 
Cc: "juniper-nsp@puck.nether.net" 
Sent: Sun, August 29, 2010 1:34:00 PM
Subject: Re: [j-nsp] New 16port 10G Card and new MPC with 4x10G MIC Cards - 
coexistance of old DPCs and new Cards in same chassis -- looking for experience 
feedback

On Sun, Aug 29, 2010 at 07:03:59AM -0700, Derick Winkworth wrote:
> Has this always been the case with the SCBs?  Will there not be newer 
> SCBs that can run faster?  I've always heard that the MX series could 
> potentially run 240Gbps per slot but would require an SCB upgrade and 
> newer line cards... We're not there yet, but I'm wondering if it's 
> true. It sounds like below that we are talking about existing SCBs 
> which means the MX is limited to 120G per slot.

Until now each PFE has only needed 10G total bandwidth (per I-chip, * 4 
per DPC), so the fabric has been more than sufficient while still 
providing N+1. My understanding is that even with a new fabric card 
you'll still be limited to the 35G from the MQ memory bandwidth limit 
(just like you are with MX240/MX480), so the only difference will be a) 
you'll get fabric redundancy back, and b) you'll get support for future 
cards (like 100GE, etc).

Another thing I forgot to mention is that the old ADPC I-chip cards can 
still only talk to the same number of SCB's that they did originally (2x 
on MX960, 1x on MX240/480). This means that when you're running mixed 
I-chip and Trio cards in the same chassis, in say for example an MX960, 
all traffic going to/from an I-chip card will stay on 2 out of 3 SCBs, 
and only the Trio-to-Trio traffic will be able to use the 3rd SCB. If 
all of your traffic is going between a Trio card and other I-chip cards, 
this will obviously bottleneck your Trio capacity at 20G per PFE (minus 
overhead). Supposedly there is an intelligent fabric request/grant 
system, so hopefully the Trio PFEs are smart enough to use more capacity 
on the 3rd SCB for trio-to-trio traffic if the first 2 are being loaded 
up with I-chip card traffic.

You can also use the hidden command "show chassis fabric statistics" to 
monitor fabric utilization and drops. The output is pretty difficult to 
parse, you have to look at it per-plane, and it isn't in XML so you 
can't even easily write an op script for it, but it's still better than 
nothing. 

Hopefully Juniper will add a better fabric utilization command, ideally 
with something that tracks the peak rate ever seen too (like Cisco 
does), for example:

cisco6509#show platform hardware capacity fabric 
Switch Fabric Resources
  Bus utilization: current: 13%, peak was 54% at 08:47:31 UTC Fri Jun 25 2010
  Fabric utilization:    Ingress                      Egress
    Module  Chanl  Speed  rate  peak                  rate  peak
    1       0      20G      1%    6% @21:14 06Apr10     1%   10% @20:14 13Feb10
    2       0      20G     10%   33% @21:15 21Mar10     0%   31% @20:10 24May10
    2       1      20G      2%   52% @03:48 30Apr10    14%   98% @10:20 09Jun10
    3       0      20G     19%   40% @20:38 21Mar10    14%   25% @01:02 09Jul10
    3       1      20G      4%   37% @10:42 09Jan10     1%   61% @02:52 20Dec09
    4       0      20G     27%   51% @20:30 14Jul10     1%    9% @17:04 03May10
    4       1      20G      2%   60% @12:12 13May10    34%   82% @01:33 29Apr10
    5       0      20G      0%    5% @18:51 14Feb10     0%   21% @18:51 14Feb10
    6       0      20G      2%   17% @03:07 29Jun10    19%   52% @17:50 14Jul10
    6       1      20G      0%   42% @10:22 20Apr10     0%   73% @02:25 28Mar10
    7       0      20G      6%   33% @10:20 09Jun10    26%   58% @02:25 19Aug10
    7       1      20G     35%   51% @19:38 14Jul10     1%    6% @16:55 03May10


Or at least expose and XML-ify the current one so we can script up 
something decent.

-- 
Richard A Steenbergen   http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)


Re: [j-nsp] New 16port 10G Card and new MPC with 4x10G MIC Cards - coexistance of old DPCs and new Cards in same chassis -- looking for experience feedback

2010-08-29 Thread Richard A Steenbergen
On Sun, Aug 29, 2010 at 07:03:59AM -0700, Derick Winkworth wrote:
> Has this always been the case with the SCBs?  Will there not be newer 
> SCBs that can run faster?  I've always heard that the MX series could 
> potentially run 240Gbps per slot but would require an SCB upgrade and 
> newer line cards... We're not there yet, but I'm wondering if it's 
> true. It sounds like below that we are talking about existing SCBs 
> which means the MX is limited to 120G per slot.

Until now each PFE has only needed 10G total bandwidth (per I-chip, * 4 
per DPC), so the fabric has been more than sufficient while still 
providing N+1. My understanding is that even with a new fabric card 
you'll still be limited to the 35G from the MQ memory bandwidth limit 
(just like you are with MX240/MX480), so the only difference will be a) 
you'll get fabric redundancy back, and b) you'll get support for future 
cards (like 100GE, etc).

Another thing I forgot to mention is that the old ADPC I-chip cards can 
still only talk to the same number of SCB's that they did originally (2x 
on MX960, 1x on MX240/480). This means that when you're running mixed 
I-chip and Trio cards in the same chassis, in say for example an MX960, 
all traffic going to/from an I-chip card will stay on 2 out of 3 SCBs, 
and only the Trio-to-Trio traffic will be able to use the 3rd SCB. If 
all of your traffic is going between a Trio card and other I-chip cards, 
this will obviously bottleneck your Trio capacity at 20G per PFE (minus 
overhead). Supposedly there is an intelligent fabric request/grant 
system, so hopefully the Trio PFEs are smart enough to use more capacity 
on the 3rd SCB for trio-to-trio traffic if the first 2 are being loaded 
up with I-chip card traffic.

You can also use the hidden command "show chassis fabric statistics" to 
monitor fabric utilization and drops. The output is pretty difficult to 
parse, you have to look at it per-plane, and it isn't in XML so you 
can't even easily write an op script for it, but it's still better than 
nothing. 
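
Lacking anything structured, one rough workaround (a sketch only; the user, 
hostname and log path are placeholders) is to scrape the command periodically 
over SSH and graph or diff the counters offline:

    # crontab entry, collecting fabric stats every 5 minutes
    */5 * * * * ssh monitor@mx960 "show chassis fabric statistics | no-more" >> /var/log/mx960-fabric-stats.log 2>&1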

Hopefully Juniper will add a better fabric utilization command, ideally 
with something that tracks the peak rate ever seen too (like Cisco 
does), for example:

cisco6509#show platform hardware capacity fabric 
Switch Fabric Resources
  Bus utilization: current: 13%, peak was 54% at 08:47:31 UTC Fri Jun 25 2010
  Fabric utilization:    Ingress                      Egress
    Module  Chanl  Speed  rate  peak                  rate  peak
    1       0      20G      1%    6% @21:14 06Apr10     1%   10% @20:14 13Feb10
    2       0      20G     10%   33% @21:15 21Mar10     0%   31% @20:10 24May10
    2       1      20G      2%   52% @03:48 30Apr10    14%   98% @10:20 09Jun10
    3       0      20G     19%   40% @20:38 21Mar10    14%   25% @01:02 09Jul10
    3       1      20G      4%   37% @10:42 09Jan10     1%   61% @02:52 20Dec09
    4       0      20G     27%   51% @20:30 14Jul10     1%    9% @17:04 03May10
    4       1      20G      2%   60% @12:12 13May10    34%   82% @01:33 29Apr10
    5       0      20G      0%    5% @18:51 14Feb10     0%   21% @18:51 14Feb10
    6       0      20G      2%   17% @03:07 29Jun10    19%   52% @17:50 14Jul10
    6       1      20G      0%   42% @10:22 20Apr10     0%   73% @02:25 28Mar10
    7       0      20G      6%   33% @10:20 09Jun10    26%   58% @02:25 19Aug10
    7       1      20G     35%   51% @19:38 14Jul10     1%    6% @16:55 03May10


Or at least expose and XML-ify the current one so we can script up 
something decent.

-- 
Richard A Steenbergen   http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)


Re: [j-nsp] New 16port 10G Card and new MPC with 4x10G MIC Cards - coexistance of old DPCs and new Cards in same chassis -- looking for experience feedback

2010-08-29 Thread Mark Tinka
On Saturday, August 28, 2010 09:43:12 am Richard A 
Steenbergen wrote:

> At this point all I can really say is be on the
> lookout for it, and if you do see it make sure
> Juniper looks at it quickly, because it goes away on its
> own and leaves no evidence for them to debug. :)

The perfect crime :-).

Cheers,

Mark.





Re: [j-nsp] juniper-nsp Digest, Vol 93, Issue 40

2010-08-29 Thread Haider Ali
Hi friends,

I have to implement the SRX650 in a cluster but am facing issues with it. I was 
running JUNOS 10.0R3.10 on the SRX650 cluster and hit multiple problems: the 
kernel shooting up to 95%, many services not working properly, and routes 
missing from the routing table even though the next hop was reachable and 
active at the time. The SRX also generated a core dump named "flowd_octeon_hm", 
and in the output of "show system processes summary" I can see a process named 
"flowd_octeon_hm" using around 1039% WCPU.

I opened a ticket with Juniper Networks and they suggested moving to JUNOS 
10.1R3.7, as the core dump was caused by a bug in 10.0R3.10. But after 
upgrading to 10.1R3.7 I see the same process using the same amount of CPU, even 
though the boxes are not in production and carry no traffic. When I later told 
JTAC about this, they found it is caused by a bug in the recommended JUNOS as 
well, so they now suggest moving to JUNOS 10.2.

I just want to know if anyone has observed this kind of behaviour from an 
SRX650 cluster. If so, please share the procedure and steps you used to get the 
problem resolved.

Concerns:
1) The FTP service was not working properly on the SRX650, although the ALG was 
disabled. There will be 800+ FTP sessions once the boxes are placed in 
production.
2) The kernel shoots up to 95% with a minimal amount of traffic on the SRX650 
cluster (traffic around 25-30 MB).
3) The SRX650 cluster is not passing the policy traffic logs to NSM-2010.2.
4) I have to enable all logging on the boxes, both the IDP logging and the 
policy logging.

NOTE: Please suggest a JUNOS version that resolves all of the above issues.
 
Please share your comments on this.

Regards

Haider Ali
0092-331-2827010


Re: [j-nsp] New 16port 10G Card and new MPC with 4x10G MIC Cards - coexistance of old DPCs and new Cards in same chassis -- looking for experience feedback

2010-08-29 Thread Derick Winkworth
Has this always been the case with the SCBs?  Will there not be newer SCBs that 
can run faster?  I've always heard that the MX series could potentially run 
240Gbps per slot but would require an SCB upgrade and newer line cards... We're 
not there yet, but I'm wondering if it's true.  It sounds like below that we are 
talking about existing SCBs, which means the MX is limited to 120G per slot.






From: Richard A Steenbergen 
To: Pavel Lunin 
Cc: "juniper-nsp@puck.nether.net" 
Sent: Sun, August 29, 2010 1:39:18 AM
Subject: Re: [j-nsp] New 16port 10G Card and new MPC with 4x10G MIC Cards - 
coexistance of old DPCs and new Cards in same chassis -- looking for experience 
feedback

On Sun, Aug 29, 2010 at 02:29:29AM +0400, Pavel Lunin wrote:
> My hypothesis is that the MQ can actually do twice as much: 65 Mpps from 
> the interfaces to the back-plane and 65 backwards. Otherwise you'll never 
> get 30 Gbps FD with the MPC1. But this knowledge is too burdensome for 
> sales people, because if you don't know it, you can just multiply 65 by 
> the number of chips in a box and get the right pps number. One could 
> hardly understand that each MQ actually does twice as much work but 
> each packet passes two MQs, so you need to multiply and then divide by 2 
> accordingly.

I got some replies off-list which helped shed some light on the Trio 
capabilities, so with their permission I will summarize the major points 
for the archives:

* Each Trio PFE is composed of the following ASICs:

  - MQ: Handles the packet memory, talks to the chassis fabric and the 
WAN ports, handles port-based QoS, punts first part of the packet 
to the LU chip for routing lookups.
  - LU: Lookup ASIC which does all IP routing lookups, MAC lookups, 
label switching, firewall matching, policing, accounting, etc.
  - QX: (optional) Implements the fine grained queueing/HQoS stuff.
NOT included on the 16-port 10GE MPC.
  - IX: (optional) Sits in front of the MQ chip to handle GigE ports.

* The Trio PFE is good for around 55Mpps of lookups, give or take, 
  depending on the exact operations being performed.

* The MQ chip can do around 70Gbps, give or take depending on the 
  packet size. Certain packet sizes can make it all the way to 80Gbps, 
  inconvenient packet sizes can bring it down below 70G by the time you 
  figure in overhead, but the gist is around 70Gbps. This limit is set 
  by the bandwidth of the packet memory. The quoted literature capacity 
  of 60Gbps is intended to be a "safe" number that can always be met.

* The 70G of MQ memory bandwidth is shared between the fabric facing 
  and WAN facing ports, giving you a bidirectional max of 35Gbps each 
  if you run 100% fabric<->wan traffic. If you do locally switched wan->
  wan traffic, you can get the full 70Gbps. On a fabricless chassis like 
  the MX80, that is how you get the entire amount.

* The MX960 can only provide around 10Gbps per SCB to each PFE, so it 
  needs to run all 3 SCBs actively to get to 30Gbps. If you lose an SCB, 
  it drops to 20Gbps, etc. This is pre cell overhead, so the actual 
  bandwidth is less (for example, around 28Gbps for 1500 byte packets).

* The MX240 and MX480 provide 20Gbps of bandwidth per SCB to each PFE, 
  and will run both actively to get to around 40Gbps (minus the above 
  overhead). Of course the aforementioned 35Gbps memory limit still 
  applies, so even though you have 40Gbps of fabric on these chassis 
  you'll still top out at 35Gbps if you do all fabric<->wan traffic.

* Anything that is locally switched counts against the LU capacity and 
  the MQ capacity, but not the fabric capacity. As long as you don't 
  exhaust the MQ/fabric, you can get line rate out of the WAN 
  interfaces. For example, 30Gbps of fabric switched + 10Gbps of locally 
  switched traffic on a MX240 or MX480 will not exceed the MQ or fabric 
  capacity and will give you bidirectional line rate.

* I'm still hearing mixed information about egress filters affecting 
  local switching, but the latest and most authoritative answer is that 
  it DOESN'T actually affect local switching. Everything that can be 
  locally switched supposedly is, including tunnel encapsulation, so if 
  you receive a packet, tunnel it, and send it back out locally, you get 
  100% free tunneling with no impact to your other capacity.

I think that was everything. And if they aren't planning to add it 
already, please join me in asking them to add a way to view fabric 
utilization, as it would really make managing the local vs fabric 
capacities a lot easier.

-- 
Richard A Steenbergen   http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)

Re: [j-nsp] Multicast PIM RP state

2010-08-29 Thread Chris Evans
Stacy,

Another question about the tunnel interfaces, if I may. To finish my thought
about the interfaces: when I look at 'monitor interface traffic' I only
see the receiving side showing traffic on the mt tunnel interface. Does this
mean exactly what I believe it means, namely that only the receiver-side tunnel
PIC's resources will be utilized?

Thanks again

Chris

On Thu, Aug 26, 2010 at 12:16 AM, Smith W. Stacy  wrote:

>
> On Aug 25, 2010, at 8:43 PM, Chris Evans wrote:
> > I'm testing a mVPN. It uses the multicast tunnel (mt) interfaces for the
> traffic. When I do a 'monitor interface traffic' I only see packets on the
> receiving end of the tunnel hitting the MT interface.
>
> You might end up having to sum the traffic statistics for all of the
> logical interfaces (mt, pd, pe, ls, lt, gr, etc.) that are provided by the
> specific Tunnel or Services PIC.
>
> > My question is, I assume there is a performance limitation associated
> with the MT interface. When I do a 'show interface' it states that this
> interface is 800Mbps capable, so I assume this is the max aggregate traffic
> that this PIC can service?
>
> The throughput capabilities will depend on the type and model of Tunnel or
> Services PIC that is providing the mt interface. I'm not sure the 800 Mbps
> output you see in 'show interface'  is an accurate reflection of the
> performance capability of the Tunnel or Services PIC. Tunnel PICs are
> generally limited by the aggregate FPC throughput of the FPC in which they
> are installed rather than having a PIC-specific throughput limitation.
>
> --Stacy


Re: [j-nsp] New 16port 10G Card and new MPC with 4x10G MIC Cards - coexistance of old DPCs and new Cards in same chassis -- looking for experience feedback

2010-08-29 Thread Pavel Lunin
> > > * The Trio PFE is good for around 55Mpps of lookups, give or take,
> > >  depending on the exact operations being performed.
> >
> > 55, not 65? Anyway, this is what I can't understand (maybe because of
> > my non-native English). When you say 'give or take', do you mean it can
> > only do 55/65 for both directions, or 55/65 for ethernet->backplane and
> > 55/65 backwards?
>
> 55 million PACKETS/sec of lookups per PFE, no directionality to that.
> Lookups are done at ingress. And 55 is what I'm told, not 65.
>

Thank you for this clarification.

Yeah, I understand that the ingress LU performs the lookup. I meant the case of
the MX80. It's essentially the case of local switching on the MPC1, but with
twice as many interfaces: where the backplane->ethernet direction on the MPC
needs no LU lookups, on the MX80 the built-in-XE->MIC direction does require
them. Given that the MQ is capable of transferring close to 70 Gigs, with small
packets the LU limitation comes into play much earlier (when the MQ is only
transferring close to 30 Gigs), doesn't it?
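
As a back-of-the-envelope check of that crossover (my own numbers, not from the
thread): 55 Mpps of 64-byte frames is about 55e6 * 64 * 8 = ~28 Gbps of Ethernet
payload (roughly 37 Gbps on the wire once the 20 bytes of preamble and
inter-frame gap per packet are included), so at minimum-size packets the LU
ceiling is reached well before the ~70 Gbps MQ memory limit, whereas at
1500-byte packets the MQ limit dominates long before 55 Mpps is reached.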

--
Pavel


Re: [j-nsp] New 16port 10G Card and new MPC with 4x10G MIC Cards - coexistance of old DPCs and new Cards in same chassis -- looking for experience feedback

2010-08-29 Thread Richard A Steenbergen
On Sun, Aug 29, 2010 at 11:00:55AM +0400, Pavel Lunin wrote:
> 
> > * The Trio PFE is good for around 55Mpps of lookups, give or take,
> >  depending on the exact operations being performed.
> 
> 55, not 65? Anyway, this is what I can't understand (maybe because of 
> my non-native English). When you say 'give or take', do you mean it can 
> only do 55/65 for both directions, or 55/65 for ethernet->backplane and 
> 55/65 backwards?

55 million PACKETS/sec of lookups per PFE, no directionality to that. 
Lookups are done at ingress. And 55 is what I'm told, not 65.

> If this is an overall LU limit for both directions, then either I can't 
> manage to convert pps to bps (65 Mpps is about 30 Gigs for 64-byte 
> packets, isn't it?) or everything has half the performance we think it 
> does (not likely though).

Bandwidth is a completely different limitation, which comes from the MQ 
and/or fabric limits. Technically 4 ports of 10GE would exceed the 
55Mpps (14.88Mpps * 4 = 59.52Mpps), so you wouldn't quite be able to 
get line rate on small packets even with local switching.

-- 
Richard A Steenbergen   http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)


Re: [j-nsp] New 16port 10G Card and new MPC with 4x10G MIC Cards - coexistance of old DPCs and new Cards in same chassis -- looking for experience feedback

2010-08-29 Thread Pavel Lunin
Thanks, Richard.

2010/8/29 Richard A Steenbergen 

>
> * Each Trio PFE is composed of the following ASICs:
>
>  - MQ: Handles the packet memory, talks to the chassis fabric and the
>WAN ports, handles port-based QoS, punts first part of the packet
>to the LU chip for routing lookups.
>  - LU: Lookup ASIC which does all IP routing lookups, MAC lookups,
>label switching, firewall matching, policing, accounting, etc.
>  - QX: (optional) Implements the fine grained queueing/HQoS stuff.
>NOT included on the 16-port 10GE MPC.
>  - IX: (optional) Sits in front of the MQ chip to handle GigE ports.
>


Here is another joke about the '3D' name, as told by Juniper's own people: it's
called 3D because it actually consists of four chips.



> * The Trio PFE is good for around 55Mpps of lookups, give or take,
>  depending on the exact operations being performed.
>

55, not 65? Anyway, this is what I can't understand (maybe because of my
non-native English). When you say 'give or take', do you mean it can only do
55/65 for both directions, or 55/65 for ethernet->backplane and 55/65
backwards?

If this is an overall LU limit for both directions, then either I can't manage
to convert pps to bps (65 Mpps is about 30 Gigs for 64-byte packets, isn't it?)
or everything has half the performance we think it does (not likely
though).

--
Pavel