Re: [j-nsp] MX80 Route table Size

2013-09-24 Thread Saku Ytti
On (2013-09-24 08:49 +0300), Krasimir Avramski wrote:

 Ichip (DPC) has 16-32MB RLDRAM and holds 1M routes in FIB, so 256MB on Trio is
 a huge increment - it is in the realm of ~5M routes (since they use dynamic
 memory allocation to fill up with routes only) and more than 1M labeled prefixes

I don't think this is apples to apples. The 16MB RLDRAM is just for jtree,
while the 256MB in Trio holds a lot more than just ktree, and some elements are
sprayed across the 4*64MB devices which make up the 256MB RLDRAM.
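
Rough numbers from this thread alone (my own arithmetic, not a Juniper figure):

  16MB / 1M routes  ~= 16B per jtree route
  256MB / 16B       ~= 16M raw entries
  quoted ~5M        => roughly two thirds of the RLDRAM going to NHs,
                       counters, filters and friends rather than the tree itself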

I'd be quite comfortable with 2M FIB throughout the lifecycle of the current
generation, but I've never heard JNPR quote anything near that for Trio scale.

I'm also not sure I understand why it matters whether a route is labeled or not:
if each route has a unique label, then you're wasting NH space, but if you
are doing next-hop-self and advertising only loopback labels, then I don't
think a labeled route should be any more expensive.
(NH lives in RLDRAM in Trio as well, and I believe it specifically is sprayed
across all four RLDRAM devices.)

-- 
  ++ytti


Re: [j-nsp] SRX Command

2013-09-24 Thread Ben Dale
Just blew the dust off it and it still works ; )

http://pastebin.com/xiszACPf

If you're applying this to a chassis cluster, you may need to replace the line:

for-each ($policies-list/security-context/policies) {

with 

for-each ($policies-list/multi-routing-engine-item/security-context/policies) {
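
If the pastebin ever goes away, the guts of the script look roughly like this -
a from-memory sketch, not the full thing, and the RPC name is my assumption
(verify it with "show security policies detail | display xml rpc"):

version 1.0;
ns junos = "http://xml.juniper.net/junos/*/junos";
ns xnm = "http://xml.juniper.net/xnm/1.1/xnm";
ns jcs = "http://xml.juniper.net/junos/commit-scripts/1.0";
import "../import/junos.xsl";

match / {
    <op-script-results> {
        /* "show security policies detail" over the local connection -
           RPC name assumed, check with | display xml rpc */
        var $policies-list = jcs:invoke( <get-firewall-policies> { <detail>; } );
        /* standalone box - swap in the multi-routing-engine-item XPath
           above for a chassis cluster */
        for-each ($policies-list/security-context/policies) {
            expr jcs:output( policy-information/policy-name );
        }
    }
}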

Enjoy,

Ben

On 24/09/2013, at 4:43 PM, Maarten van der Hoek maar...@vanderhoek.nl wrote:

 [snip]




Re: [j-nsp] LACP/LAG Between MX and Cisco

2013-09-24 Thread Per Granath
Hi,

Keep in mind that SRX and MX/MPC use different command hierarchies for the
load-balancing hash config, which means your lab results won't carry over
directly.

SRX (and MX/DPC) use hash-key
MX/MPC use enhanced-hash-key

The hash is computed on the ingress card of the MX (which might not be the card
connected to your 3750).

http://kb.juniper.net/InfoCenter/index?page=content&id=KB24339

http://www.junosandme.net/article-junos-load-balancing-part-1-introduction-105738134.html
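
For the MX side, the bundle itself is only a few lines - a minimal sketch with
placeholder ports and addressing (hash tweaks, if you need any at all, live
under the enhanced-hash-key hierarchy mentioned above):

# placeholder interfaces/address - adjust to your chassis
set chassis aggregated-devices ethernet device-count 1
set interfaces ge-0/0/0 gigether-options 802.3ad ae0
set interfaces ge-0/0/1 gigether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family inet address 192.0.2.1/30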


-Original Message-
From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of 
Keith
Sent: Tuesday, September 24, 2013 1:38 AM
To: juniper-nsp@puck.nether.net
Subject: [j-nsp] LACP/LAG Between MX and Cisco

Hi.

We have a 3750X and an MX480 connected together.

As the gig link between the two is now approaching capacity, we will be turning
up a LAG between them.

As the traffic is all coming in from the internet through the MX towards the
3750 and customer eyeballs, I'm wondering about the 'best' way to load balance
the traffic going from the MX to the 3750.

I have things going in my lab, but there are lots of load-balancing options on
the 3750, and the Juniper in my lab (an SRX240) does not have the same options
as the MX we use.

In reading and experimenting I get different results in my lab, but as there
are more options on the MX than on the SRX, I am unsure which load-balancing
options would work best in this scenario.

The plan: a VLAN on the Cisco with an IP address, two ports bundled into a
channel and connected to the MX, and an IP address on ae0.

Thanks,
Keith






Re: [j-nsp] MX80 Route table Size

2013-09-24 Thread Krasimir Avramski
Agreed - there are other elements like counters, filters, descriptors, etc.,
but it is dynamic allocation, which isn't the case with ichip: a 16M bank for
firewalls, 16M for jtree, with fixed regions. Although there is a workaround
for ichip (
http://www.juniper.net/techpubs/en_US/junos10.4/topics/task/configuration/junos-software-jtree-memory-repartitioning.html
), I am calculating the worst-case scenario: unique inner VPN labels with
composite next-hops.
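
If memory serves, the repartitioning knob behind that link is a one-liner on
DPC boxes - quoting from memory, so verify against the doc before committing:

# reallocates jtree memory towards routes (from memory - check the doc)
set chassis memory-enhanced route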


Best Regards,
Krasi


On 24 September 2013 09:40, Saku Ytti s...@ytti.fi wrote:

 [snip]



Re: [j-nsp] SRX Command

2013-09-24 Thread Maarten van der Hoek
Hi Ben,

Did you succeed in building that script?
(e.g. do you have it somewhere? ;-) )

We've been playing with exports and then importing into Excel... but it's still
not very nice.
A better solution would be welcome.
(We can't use Junos Space or the like because most deployments are in separate
small / branch offices.)

Brgds,

Maarten van der Hoek

-----Original Message-----
From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of Ben
Dale
Sent: Tuesday, 24 September 2013 6:46
To: Edward Dore
CC: juniper-nsp@puck.nether.net; Harri Makela
Subject: Re: [j-nsp] SRX Command

After I spent a bit of time building an op script to print policy matches
out in a nicely formatted table, I noticed that this feature is now available
for all policies, even without the "then count" action, from 12.1:

show security policies hit-count

Cheers,

Ben

On 24/09/2013, at 8:45 AM, Edward Dore edward.d...@freethought-internet.co.uk wrote:

 You'll need to add the count action to the then statement on each
security policy if you want to track the number of times that the policy has
been matched.
 
 Edward Dore
 Freethought Internet
 
 On 23 Sep 2013, at 23:08, Harri Makela wrote:
 
 Hi All
 
 Is there any command in SRX which I can use to check the number of times a FW
 policy has been used? I actually want to clear all FW policies which have not
 been used for the last 12 months or so. I don't know much about scripting,
 but I can try to get help if I can think of a command which can be run
 through the different zone combinations.
 
 
 Thanks in Advance !
 HM


Re: [j-nsp] Junos BNG PPPoE inside a VPLS

2013-09-24 Thread Mark Tinka
On Tuesday, September 17, 2013 04:05:50 PM Adrien Desportes wrote:

 Hello William,
 
 Before 13.2 you would have to use an external loop to
 terminate the VPLS on one side and the PPP on the other
 side (as the lt- interface does not support the proper
 encapsulation for PPP).
 
 Starting with 13.2 (just released), if you use L2VPN
 rather than VPLS to backhaul your DSLAM VLAN, the feature
 below might do the job without consuming ports for the
 external hairpin:
 
 http://www.juniper.net/techpubs/en_US/junos13.2/topics/concept/pseudowire-subscriber-interfaces-overview.html

Now I really don't have to have VPLS at all.

Happy days :-).
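
For reference, my reading of that doc is that a ps- device anchors on an lt-
and then terminates the l2circuit - a sketch only, with placeholder IDs,
neighbor and bandwidth, and the subscriber-side units omitted:

# untested sketch - anchor, neighbor and VC-ID are placeholders
set chassis pseudowire-service device-count 1
set chassis fpc 0 pic 0 tunnel-services bandwidth 1g
set interfaces ps0 anchor-point lt-0/0/0
set interfaces ps0 unit 0 encapsulation ethernet-ccc
set protocols l2circuit neighbor 192.0.2.1 interface ps0.0 virtual-circuit-id 100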

Mark.



Re: [j-nsp] NAT on MX platforms?

2013-09-24 Thread Mark Tinka
On Wednesday, September 18, 2013 06:13:06 PM rkramer wrote:

 I currently use MX240's throughout my routing environment
 today, and I'm looking to upgrade my existing NAT boxes,
 which are Cisco ASR's.  They are running out of
 horsepower, and from what I'm seeing, MS-DPC's on MX's
 provide more than enough capacity...  I'm not planning
 to do any firewalling at the NAT locations, just nat
 pools and 1:1 nat translations, with some 6to4 thrown in
 for good measure.

Just for completeness, have you already looked at the larger 
Cisco ESP's for the ASR1000 platform? Not sure if you're 
running hardware where the ESP is modular.

Mark.



Re: [j-nsp] SRX Command

2013-09-24 Thread Harri Makela
Thanks for the pointer.

We have JUNOS Software Release [10.4R5.5] and it doesn't look like we have the
option indicated in the last mail:

admin@SRX-3600-P> show security policies ?
Possible completions:
  [Enter]    Execute this command
  detail   Show the detailed information
  from-zone    Show the policy information matching the given source 
zone
  policy-name  Show the policy information matching the given policy 
name
  to-zone  Show the policy information matching the given 
destination zone
  |    Pipe through a command
{primary:node0}
admin@SRX-3600-P> show security policies hit
                                         ^

I can capture all duplicate policies and delete the ones that are not required
for the same flow, but I would also like to delete the policies which are not
being used at all and are just sitting there. Not sure how I can accomplish
that with a Junos command run in parallel from a shell script.

Looking forward to any feedback.

Thanks
HM






 From: Ben Dale bd...@comlinx.com.au
To: Edward Dore edward.d...@freethought-internet.co.uk 
Cc: Harri Makela harri_mak...@yahoo.com; juniper-nsp@puck.nether.net 
juniper-nsp@puck.nether.net 
Sent: Tuesday, 24 September 2013, 5:45
Subject: Re: [j-nsp] SRX Command
 

[snip]
 


Re: [j-nsp] MX80 Route table Size

2013-09-24 Thread Nitzan Tzelniker
Hi,

The problem with the MX80 is not the FIB size but the slow RE.
The time it takes to receive a full routing table is long, and putting it into
the FIB is even worse.

Nitzan


On Tue, Sep 24, 2013 at 10:21 AM, Krasimir Avramski kr...@smartcom.bg wrote:

 [snip]



Re: [j-nsp] MX80 Route table Size

2013-09-24 Thread Krasimir Avramski
We are aware the PPC on the MX80 is slower than the Intel REs... but the
original question was about scalability, not performance/convergence.
Take a look at the newer MX104 for more RE performance.

Krasi


On 24 September 2013 17:18, Nitzan Tzelniker nitzan.tzelni...@gmail.com wrote:

 [snip]





Re: [j-nsp] MX80 Route table Size

2013-09-24 Thread Paul Stewart
Not to hijack this thread, but does anyone know *real-world* numbers yet on
the MX104 RE? I know it has more memory and is supposed to be faster, but I
have no idea yet how much faster it really is.

We don't have any in our network yet, but we're anxious to deploy one at the
end of the year...

Thanks for any input...

Paul



On 2013-09-24 10:40 AM, Krasimir Avramski kr...@smartcom.bg wrote:

[snip]




[j-nsp] Junos ospf question

2013-09-24 Thread R S


Hi

Is there a way with Junos to manipulate the OSPF metric so as to mark as
unreachable a network received through a particular path?

I was told that with ScreenOS this was possible, but I'm wondering whether it
is true or not.

Any experience or feedback?
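
The closest thing I can see myself is an OSPF import policy, which only stops
the route being installed in inet.0 (LSA flooding continues) - a sketch with a
placeholder prefix and policy name below. Is there anything better?

# placeholder prefix/name - import only blocks installation, not flooding
set policy-options policy-statement NO-BAD-NET term drop from route-filter 203.0.113.0/24 exact
set policy-options policy-statement NO-BAD-NET term drop then reject
set policy-options policy-statement NO-BAD-NET term rest then accept
set protocols ospf import NO-BAD-NET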


Regards
  


Re: [j-nsp] MX80 Route table Size

2013-09-24 Thread Amos Rosenboim
To add to Nitzan's comment (we work together):
When everything is stable, all is good.
But bounce a full-table BGP session, and then bounce an IGP adjacency, and you
are in a lot of trouble.
This seems to be a combination of the (in)famous Junos software issue described
extensively by RAS and a processor so slow that it makes the software issue
appear in a much smaller environment than the one RAS described.
Having said all this, run it with a few thousand routes and it's a beast.
I think this box really changed the game for many small ISPs.

Cheers

Amos

Sent from my iPhone

On 24 Sep 2013, at 17:21, Nitzan Tzelniker nitzan.tzelni...@gmail.com wrote:

[snip]


Re: [j-nsp] MX80 Route table Size

2013-09-24 Thread Mark Tinka
On Tuesday, September 24, 2013 04:50:41 PM Paul Stewart wrote:

 Not to hijack this thread, but does anyone know
 *real-world* numbers yet on the MX104 RE? I know it has
 more memory and is supposed to be faster, but I have no
 idea yet how much faster it really is.
 
 We don't have any in our network yet, but we're anxious
 to deploy one at the end of the year...

The MX104 is a PPC board as well, so don't expect the RE to
be that much quicker than the MX80's.

That said, unlike the MX80, the MX104's REs are modular. So
hopefully, in the future, when Juniper can build x86-based
REs for the MX104 within budget, we shall see some speed on
this platform's control and management planes.

Mark.



Re: [j-nsp] MX80 Route table Size

2013-09-24 Thread Luca Salvatore
This concerns me a little. I'm about to take a full table on an MX5.
Is it only an issue when the adjacency is lost and we need to receive the
table again, or will performance of the entire box be affected?
-- 
Luca 





On 25/09/13 12:18 AM, Nitzan Tzelniker nitzan.tzelni...@gmail.com wrote:

[snip]




Re: [j-nsp] Junos BNG PPPoE inside a VPLS

2013-09-24 Thread Graham Brown
I've run into a very strange bug on the MX where PPP through a VPLS results
in the packets being mangled - affected circuits have been migrated to
L2VPNs. A fix is provided in 12.3R4, which we are currently testing -
I'll dig out the PR when I get into the office.

Graham Brown
Network Engineer
Snap Internet
Christchurch, New Zealand
Twitter - @mountainrescuer https://twitter.com/#!/mountainrescuer
LinkedIn http://www.linkedin.com/in/grahamcbrown


On 24 September 2013 21:46, Mark Tinka mark.ti...@seacom.mu wrote:

 [snip]






Re: [j-nsp] SRX Command

2013-09-24 Thread Ben Dale
Harri,

As per the link below - add "then count" to all your policies (the following
apply-group will do this quickly for you; the <*> wildcards appear to have been
eaten by the list archive):

set groups COUNT-ALL security policies from-zone <*> to-zone <*> policy <*> then count
set apply-groups COUNT-ALL

If you install the op-script provided and run it after a month or so, it will 
show you pretty quickly which policies are being used, but if you don't want to 
use an op script, try:

run show security policies detail | match "Policy:|zone|lookups"

Again - the lookups field will only be there if the policy has count enabled.
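
On 12.1 and later, the built-in counters can also be sorted and filtered
straight from the CLI - options quoted from memory, so check your release:

show security policies hit-count descending
show security policies hit-count less-than 1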

Cheers,

Ben

On 24/09/2013, at 10:37 PM, Harri Makela harri_mak...@yahoo.com wrote:

 [snip]
 
 



Re: [j-nsp] Junos BNG PPPoE inside a VPLS

2013-09-24 Thread Paul Stewart
Please do share...

We are looking at launching an MX480 with RE-1800s for BNG functions
(PPPoE). I'd really like to haul L2VPNs directly to the box, and the 13.2
feature mentioned may be the solution... ;)

Paul


On 2013-09-24 6:27 PM, Graham Brown juniper-...@grahambrown.info wrote:

[snip]

