Re: [j-nsp] Multihoming Using Juniper SRX 240

2012-11-20 Thread Farrukh Haroon
Dear Rehan

To complement Morgan's response.

It seems you are based in Saudi Arabia.  Here the IGW allows ISPs to advertise a /25.
Assuming you already have your own /24 from RIPE, you can divide it into
two /25s and achieve reasonable 'load-sharing' in the inbound direction.  To
keep the traffic uniform, make sure that the NAT rules use IP addresses from
both /25 ranges, e.g. if you have two proxy servers, place one in each.
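
As a rough illustration (the prefix and the policy/term names below are examples, not your real ones), the split can be done with a per-ISP BGP export policy that advertises one /25 plus the covering /24 to each upstream:

policy-options {
    policy-statement EXPORT-TO-ISP-A {
        term FIRST-HALF {
            from {
                route-filter 203.0.113.0/25 exact;    # first /25, normally reached via ISP-A
            }
            then accept;
        }
        term COVERING-24 {
            from {
                route-filter 203.0.113.0/24 exact;    # full /24 as a fallback path
            }
            then accept;
        }
        then reject;
    }
}

The mirror policy toward ISP-B would accept 203.0.113.128/25 plus the /24 (the /24 and both /25s need to exist locally, e.g. as aggregate or static discard routes, before they can be exported).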

For outbound, you could do some sort of intelligent PBR with tracking
(FBF), with an appropriate switchover to the secondary ISP, e.g. user
VLANs 1, 2 and 3 go to ISP-A; servers and user VLANs 4 and 5 go to ISP-B. If
either ISP is down, all traffic should go through the live one.

There is an example of using FBF to do this in the SRX security configuration
guide and in the O'Reilly SRX security book (Chapter 11).
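
As a very rough sketch of the FBF part (instance names, gateway addresses and the VLAN subnet are made up, and the RPM/ip-monitoring piece for the switchover is left out):

routing-instances {
    VIA-ISP-A {
        instance-type forwarding;
        routing-options {
            static route 0.0.0.0/0 next-hop 192.0.2.1;      # ISP-A gateway (example)
        }
    }
    VIA-ISP-B {
        instance-type forwarding;
        routing-options {
            static route 0.0.0.0/0 next-hop 198.51.100.1;   # ISP-B gateway (example)
        }
    }
}
routing-options {
    interface-routes {
        rib-group inet FBF-GROUP;                           # copy interface routes into both instances
    }
    rib-groups {
        FBF-GROUP {
            import-rib [ inet.0 VIA-ISP-A.inet.0 VIA-ISP-B.inet.0 ];
        }
    }
}
firewall {
    family inet {
        filter CLASSIFY-USERS {
            term VLANS-1-2-3 {
                from {
                    source-address {
                        10.0.0.0/22;                        # user VLANs 1-3 (example range)
                    }
                }
                then routing-instance VIA-ISP-A;
            }
            term EVERYTHING-ELSE {
                then routing-instance VIA-ISP-B;            # servers and user VLANs 4-5
            }
        }
    }
}

CLASSIFY-USERS would then be applied as an input filter under family inet on the inside (trust) interfaces.
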
Regards

Farrukh Haroon
Riyadh, KSA


On Tue, Nov 20, 2012 at 9:37 AM, Rehan Rafi rrk@gmail.com wrote:

 Dear All,

 Kindly, can you share some case studies on achieving a multihoming setup in
 different ways?

 The setup we have is two SRXs in an Active/Passive cluster with two ISP
 connections running BGP. We have multiple things in mind that we want
 to achieve:

 - Load balancing of all traffic between the two ISP connections (not sure
 whether this is possible or not)

 - Sending/receiving traffic of some subnets through one ISP and of others
 through the other ISP, to make maximum use of both ISP links

 - In case one ISP fails, all traffic should divert to the other, working
 ISP

 Your precious thoughts on these points will be appreciated. The ultimate goal
 is to achieve redundancy and maximum utilization of both ISP links.

 Looking forward to your replies.

 --

 Regards,

 Rehan Rafi
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Fw: L2 Circuits across domains

2012-11-20 Thread Alex Arseniev
You should have the remote loopbacks also redistributed into LDP (if your
transport label comes from LDP).
In JUNOS, this does not happen by default; you must have an LDP egress policy
for this to occur. By default, LDP announces only the primary lo0.0 address.

Absent this, your l2circuits will show the OL error (no outgoing label).
Before you ask: this is quite different from Cisco IOS, which announces all
routes (bar BGP ones) as LDP FECs.
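
A minimal sketch of that kind of policy (the /8 below is only illustrative of the loopback range; note that once an egress policy is configured it replaces the default behaviour, so it must also cover the local lo0.0 address):

policy-options {
    policy-statement LDP-EGRESS {
        term PE-LOOPBACKS {
            from {
                route-filter 41.0.0.0/8 prefix-length-range /32-/32;   # local + remote PE loopbacks (illustrative)
            }
            then accept;
        }
        then reject;
    }
}
protocols {
    ldp {
        egress-policy LDP-EGRESS;
    }
}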

HTH
Rgds
Alex

- Original Message - 
From: Peter Nyamukusa peternyamuk...@yahoo.com

To: juniper-nsp@puck.nether.net
Cc: pe...@nyamukusa.com
Sent: Tuesday, November 20, 2012 7:46 AM
Subject: [j-nsp] Fw: L2 Circuits across domains


- Forwarded Message -

From: Peter Nyamukusa peternyamuk...@yahoo.com
To: juniper-nsp@puck.nether.net juniper-nsp@puck.nether.net
Sent: Tuesday, November 20, 2012 9:33 AM
Subject: L2 Circuits across domains


Hi Folks,

I have an existing L3/L2 MPLS network with a Cisco ASR in the core as a PE
router and Juniper routers as PE routers. I have been running l2circuits
successfully for some time now without any problems, using the configs below
on my PEs. I am running IS-IS as my IGP, plus BGP.




[edit protocols l2circuit]
peter@xxx-PE1# show
neighbor 41.x.x.1 {
    interface ge-0/0/2.2001 {
        virtual-circuit-id 2001;
        description "XYZ L2";
        no-control-word;
        ignore-mtu-mismatch;
    }
}

[edit interfaces ge-0/0/2]
peter@xxx-PE1# show
description "Customers L2 Circuits";
vlan-tagging;
encapsulation vlan-ccc;
unit 2001 {
    description "XYZ L2";
    encapsulation vlan-ccc;
    vlan-id 2001;
}

Neighbor: 41.x.x.2
    Interface                 Type  St     Time last up          # Up trans
    ge-0/0/2.2001(vc 2001)    rmt   Up     Nov 13 15:29:31 2012           1
      Remote PE: 41.x.x.2, Negotiated control-word: No
      Incoming label: 299776, Outgoing label: 299776
      Local interface: ge-0/0/2.2001, Status: Up, Encapsulation: VLAN
        Description: XYZ L2
Neighbor: 41.x.x.1
    Interface                 Type  St     Time last up          # Up trans
    ge-0/0/2.2101(vc 2101)    rmt   Up     Nov 13 15:29:28 2012           1
      Remote PE: 41.x.x.1, Negotiated control-word: Yes (Null)
      Incoming label: 299792, Outgoing label: 333568
      Local interface: ge-0/0/2.2101, Status: Up, Encapsulation: VLAN
        Description: ABC L2 - ANY POP



Now I am trying to extend these l2circuits to another MPLS domain to which we
have a direct Gigabit fibre connection. I am using the same concept: I establish
OSPF peering on the ASBR router with the remote ASN and redistribute my IGP, so
my loopbacks are seen by the PEs on both sides of the domains, and then establish
LDP peering. However, the l2circuit is not coming up. Any help is appreciated, as
I have been working on this for more than 24 hours and think I am now a bit
clouded.



peter@yyy-BR1# run show l2circuit connections (ASN 1234)
Layer-2 Circuit Connections:

Legend for connection status (St)
EI -- encapsulation invalid NP -- interface h/w not present
MM -- mtu mismatch Dn -- down
EM -- encapsulation mismatch VC-Dn -- Virtual circuit Down
CM -- control-word mismatch Up -- operational
VM -- vlan id mismatch CF -- Call admission control failure
OL -- no outgoing label IB -- TDM incompatible bitrate
NC -- intf encaps not CCC/TCC TM -- TDM misconfiguration
BK -- Backup Connection ST -- Standby Connection
CB -- rcvd cell-bundle size bad XX -- unknown

Legend for interface status
Up -- operational
Dn -- down
Neighbor: 5.1.1.2
Interface Type St Time last up # Up trans
ge-0/0/0.2900(vc 2900) rmt OL

peter@xxx-PE1# run show l2circuit connections (ASN 4321)
Layer-2 Circuit Connections:

Legend for connection status (St)
EI -- encapsulation invalid NP -- interface h/w not present
MM -- mtu mismatch Dn -- down
EM -- encapsulation mismatch VC-Dn -- Virtual circuit Down
CM -- control-word mismatch Up -- operational
VM -- vlan id mismatch CF -- Call admission control failure
OL -- no outgoing label IB -- TDM incompatible bitrate
NC -- intf encaps not CCC/TCC TM -- TDM misconfiguration
BK -- Backup Connection ST -- Standby Connection
CB -- rcvd cell-bundle size bad XX -- unknown

Legend for interface status
Up -- operational
Dn -- down
Neighbor: 5.1.1.1
Interface Type St Time last up # Up trans
ge-0/0/2.2900(vc 2900) rmt OL




---
| Kind Regards, |
| Peter Nyamukusa  |
| MCSE-2000/2003, CCIP, CCDP, CCVP, CCNP,  |
| JNCIS-ent, JNCIS-er, JNCIS-Sec, JNCIA-Ex, Linux+, A+   |
---
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Fw: L2 Circuits across domains

2012-11-20 Thread Peter Nyamukusa
Thanks Alex,

I had already redistributed all my loopbacks into my IGP and all were reachable.
 

| Kind Regards,  |
| Peter Nyamukusa|
| MCSE-2000/2003, CCNP, CCIP, CCDP, CCVP,  |
| JNCIS-ent, JNCIS-er, JNCIS-Sec, JNCIA-Ex, Linux+, A+   |
-



 From: Alex Arseniev alex.arsen...@gmail.com
To: Peter Nyamukusa peternyamuk...@yahoo.com; juniper-nsp@puck.nether.net 
Sent: Tuesday, November 20, 2012 12:28 PM
Subject: Re: [j-nsp] Fw: L2 Circuits across domains
 
You should have the remote loopbacks also redistributed into LDP (if your transport
label comes from LDP).
In JUNOS, this does not happen by default; you must have an LDP egress policy for
this to occur. By default, LDP announces only the primary lo0.0 address.
Absent this, your l2circuits will show the OL error (no outgoing label).
Before you ask: this is quite different from Cisco IOS, which announces all
routes (bar BGP ones) as LDP FECs.
HTH
Rgds
Alex


Re: [j-nsp] Fw: L2 Circuits across domains

2012-11-20 Thread Alex Arseniev
This is not enough.
You must have an LDP egress policy and include these loopbacks in it too:
https://www.juniper.net/techpubs/software/junos/junos93/swconfig-mpls-apps/configuring-the-ldp-egress-policy.html
HTH
Thanks
Alex

  - Original Message - 
  From: Peter Nyamukusa 
  To: Alex Arseniev ; juniper-nsp@puck.nether.net 
  Sent: Tuesday, November 20, 2012 11:07 AM
  Subject: Re: [j-nsp] Fw: L2 Circuits across domains


  Thanks Alex,

  I had already redistributed all my loopbacks into my IGP and all were reachable.

  

  | Kind Regards, |
  | Peter Nyamukusa |
  | MCSE-2000/2003, CCNP, CCIP, CCDP, CCVP, |
  | JNCIS-ent, JNCIS-er, JNCIS-Sec, JNCIA-Ex, Linux+, A+ |
  
-


--
  From: Alex Arseniev alex.arsen...@gmail.com
  To: Peter Nyamukusa peternyamuk...@yahoo.com; juniper-nsp@puck.nether.net 
  Sent: Tuesday, November 20, 2012 12:28 PM
  Subject: Re: [j-nsp] Fw: L2 Circuits across domains


  You should have the remote loopbacks also redistributed into LDP (if your
transport label comes from LDP).
  In JUNOS, this does not happen by default; you must have an LDP egress policy
for this to occur. By default, LDP announces only the primary lo0.0 address.
  Absent this, your l2circuits will show the OL error (no outgoing label).
  Before you ask: this is quite different from Cisco IOS, which announces all
routes (bar BGP ones) as LDP FECs.
  HTH
  Rgds
  Alex


[j-nsp] L3VPN PE-CE multihoming with OSPF

2012-11-20 Thread Leigh Porter
Hey Folks,

I have a somewhat typical dual-PE and dual-CE scenario. However, the two PE and
two CE routers are all in the same subnet, and the two PE routers' routing
instances can see each other over OSPF.

I'm not sure if this is wise.

Should routing instances belonging to the same MPLS VPN be able to exchange 
routes over OSPF when OSPF is the PE-CE routing protocol and those OSPF routes 
will be part of the VPN's routing table and exported with BGP?

Could there then be an issue where a route is learned by PE-A from PE-B via
OSPF, then advertised from PE-B to PE-A via BGP, and so on, round and round,
creating a route that never goes anywhere because it is self-sustained?
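
(Purely to illustrate the scenario being asked about, with placeholder names and values, the PE side would look roughly like this, with the VRF's OSPF routes ending up in the VPN table and exported to the other PE via BGP:)

routing-instances {
    CUST-VPN {
        instance-type vrf;
        interface ge-0/0/1.0;                   # the shared subnet with both CEs (and the other PE)
        route-distinguisher 64512:100;          # placeholder RD
        vrf-target target:64512:100;            # placeholder target; exports the VRF routes into BGP
        protocols {
            ospf {
                export BGP-TO-OSPF;             # placeholder policy redistributing BGP/VPN routes into OSPF
                area 0.0.0.0 {
                    interface ge-0/0/1.0;
                }
            }
        }
    }
}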

--
Leigh Porter


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] inline-jflow on MX MPC (Trio) - experiences?

2012-11-20 Thread Sebastian Wiesinger
Hello,

we're just setting up inline-jflow on MX Trio chipsets and I'm seeing
a few odd things:

1) Why is inline-jflow sending so many packets instead of putting more
   than one flow into one UDP packet? Every ~5 seconds I get a LOT of UDP
   packets at the same time, many of them containing only one flow.

2) In Douglas Hanks' Juniper MX Series book it is noted that the
   sampling rate for inline jflow must always be 1 (other rates are
   not valid). Still, it seems to work with rate 1000, for example (and
   this is also used as an example on the Juniper website); see the
   config sketch below.

3) The test collector is reporting missed flows. I'm not sure whether that
   is a problem with the collector or whether I'm really missing flows.
   Has anyone else had this problem?
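
For reference, the config in question is roughly of this shape (collector address, source address and the instance/template names are placeholders; the rate is the value being discussed):

chassis {
    fpc 0 {
        sampling-instance SAMPLE-IPFIX;              # bind the sampling instance to the Trio PFE
    }
}
services {
    flow-monitoring {
        version-ipfix {
            template IPV4-TEMPLATE {
                flow-active-timeout 60;
                flow-inactive-timeout 30;
                ipv4-template;
            }
        }
    }
}
forwarding-options {
    sampling {
        instance {
            SAMPLE-IPFIX {
                input {
                    rate 1000;                       # works in practice despite the 'must be 1' note
                }
                family inet {
                    output {
                        flow-server 192.0.2.10 {     # placeholder collector
                            port 2055;
                            version-ipfix {
                                template {
                                    IPV4-TEMPLATE;
                                }
                            }
                        }
                        inline-jflow {
                            source-address 192.0.2.1;   # placeholder export source
                        }
                    }
                }
            }
        }
    }
}

Plus 'sampling input' under family inet on the interfaces being sampled.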

Any comments or experiences with inline jflow on MX Trio?

Regards

Sebastian

-- 
GPG Key: 0x93A0B9CE (F4F6 B1A3 866B 26E9 450A  9D82 58A2 D94A 93A0 B9CE)
'Are you Death?' ... IT'S THE SCYTHE, ISN'T IT? PEOPLE ALWAYS NOTICE THE SCYTHE.
-- Terry Pratchett, The Fifth Elephant
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] inline-jflow on MX MPC (Trio) - experiences?

2012-11-20 Thread Saku Ytti
On (2012-11-20 16:54 +0100), Sebastian Wiesinger wrote:

Just started with IPFIX export on two nodes this Monday.

 2) In Douglas Hanks Juniper MX Series book it is noted that the
sampling rate for inline jflow must always be 1 (other rates are
not valid). Still it seems to work with rate 1000 for example (and
this is also used as example on the Juniper website).

Noticed this also; we're sending 'rate 500' and it is working, as Arbor is
showing correct traffic levels and is multiplying by 500.

 3) The test collector is reporting missed flows. I'm not sure if that
is a problem with the collector or if I'm really missing flows.
Anyone else had this problem?

Configured IPFIX on two boxes (same peer, same hardware, same software,
same configuration; the only difference is that the working one carries
unidirectional traffic, the non-working one bidirectional). One box stops
exporting after 1-2 h. If I look at 'show jnh 0 sample-inline statistics' I
only see 'Flow insert Policer Drops' incrementing. If I reload the PFE, it
works again for some time.

Dunno yet if it is a configuration problem or a PR or what; Bangalore JTAC
is on it, so I'm sure it'll be solved in no time at all.

-- 
  ++ytti
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Fw: L2 Circuits across domains

2012-11-20 Thread Krasimir Avramski
Hi,

Can you share the output of:

show route table inet.3 5.1.1.2      (ASN 1234)
show route table inet.3 5.1.1.1      (ASN 4321)
show ldp database l2circuit          -- on both PEs

Alex, IMHO redistributing into LDP through an egress policy wouldn't stitch the
MPLS transport (assuming LDP) between the remote PEs (domains).
Using LDP peering between the ASBRs (since the IGP is redistributed) should be
enough, since LDP downstream unsolicited with ordered control is the default on
both Juniper and Cisco.

Best Regards,
Krasi

On Tue, Nov 20, 2012 at 1:07 PM, Peter Nyamukusa
peternyamuk...@yahoo.com wrote:

 Thanks Alex,

 I had already redistributed all my loopbacks into my IGP and all were
 reachable.


 
 | Kind Regards,  |
 | Peter Nyamukusa|
 | MCSE-2000/2003, CCNP, CCIP, CCDP, CCVP,  |
 | JNCIS-ent, JNCIS-er, JNCIS-Sec, JNCIA-Ex, Linux+, A+   |

 -


 
  From: Alex Arseniev alex.arsen...@gmail.com
 To: Peter Nyamukusa peternyamuk...@yahoo.com;
 juniper-nsp@puck.nether.net
 Sent: Tuesday, November 20, 2012 12:28 PM
 Subject: Re: [j-nsp] Fw: L2 Circuits across domains

 You should have the remote loopbacks also redistributed into LDP (if your
 transport label comes from LDP).
 In JUNOS, this does not happen by default; you must have an LDP egress policy
 for this to occur. By default, LDP announces only the primary lo0.0 address.
 Absent this, your l2circuits will show the OL error (no outgoing label).
 Before you ask: this is quite different from Cisco IOS, which announces all
 routes (bar BGP ones) as LDP FECs.
 HTH
 Rgds
 Alex


[j-nsp] maximum number of routing-instances on MX and EX

2012-11-20 Thread Chris Evans
Can anyone tell me the maximum number of routing instances supported on the
MX and EX platforms?

I am working on a technical review of vendors and am looking for that
information.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] maximum number of routing-instances on MX and EX

2012-11-20 Thread Darius Seroka
Chris,

On Junos I believe there is a limit of about 15 logical instances, but I am
not aware of a limit on routing instances on these platforms. I would check
with a pre-sales team to see whether they have those details, as these figures
depend on hardware configuration, platform and Junos version.

Darius

On Tue, Nov 20, 2012 at 7:28 PM, Chris Evans chrisccnpsp...@gmail.com wrote:

 Can anyone tell me the maximum number of routing instances supported on the
 MX and EX platforms??

 I am working on a technical review of vendors and am looking for that
 information.
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] maximum number of routing-instances on MX and EX

2012-11-20 Thread Saku Ytti
On (2012-11-20 13:28 -0500), Chris Evans wrote:

 Can anyone tell me the maximum number of routing instances supported on the
 MX and EX platforms??
 
 I am working on a technical review of vendors and am looking for that
 information.

I noticed the same question on c-nsp. Any straight answer you're going to
receive is going to be marketing. Since a VRF by itself isn't inherently
expensive, it's not very different to have 100 VRFs with 1 route each or
1 VRF with 100 routes.
So before you find the absolute VRF scaling limits, you're going to run out
of DRAM on the control plane for the RIBs, or of TCAM/RLDRAM in hardware for
the FIB routes.

I think I booted an MX80 with 8000 VRFs once in the lab; it took over an hour
to boot. I seem to recall Juniper tests the MX80 with 2000 instances.



-- 
  ++ytti
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] maximum number of routing-instances on MX and EX

2012-11-20 Thread Tarko Tikan

hey,

Just for future reference, this is not true for SRX running in flow mode.

On SRX, the number of zones is software-limited, and the interfaces of a
single zone have to be in the same routing-instance. So in essence you are
limited to 127 routing instances (128 minus the global one) on a box with
128 zones.

More realistically, 63 routing instances (minimum two zones per
routing-instance), unless you only do global <-> routing-instance leaking.


--
tarko
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Multihoming Using Juniper SRX 240

2012-11-20 Thread Pavel Lunin
- Load balancing of all traffic between the two ISP connections (not sure
 whether this is possible or not)

 - Sending/receiving traffic of some subnets through one ISP and of others
 through the other ISP, to make maximum use of both ISP links



First thing I must say: in 100% of cases (I am assuming it's an enterprise
application, given that an SRX is being used as the ASBR) these two tasks
aren't really worth it for outgoing traffic. If you wish to balance
outgoing traffic, per-flow ECMP is enough almost everywhere. All those
"this subnet goes here and that subnet goes there" schemes and the consequent
FBF and PBR configurations do nothing but add unnecessary complexity and
lengthen troubleshooting time.
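
(For completeness, per-flow ECMP on Junos is just the classic forwarding-table export policy below; the policy name is arbitrary, and with two eBGP-learned defaults you would additionally need BGP multipath, e.g. 'multipath multiple-as', in the peer groups.)

policy-options {
    policy-statement PER-FLOW-LB {
        then {
            load-balance per-packet;    # despite the name, this gives per-flow load sharing on current platforms
        }
    }
}
routing-options {
    forwarding-table {
        export PER-FLOW-LB;
    }
}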

I bet your incoming traffic is at least 10 times greater than your outgoing
(well, I might be wrong, with 5% probability). What you might really need
is to balance incoming traffic. That means you should play with what you
advertise, to influence how the world forwards traffic to you, not with what
you forward to the world. Even with a single /24 prefix you still have tools
to do so: prepends and TE communities (if your ISPs support them) let you
achieve quite a lot.
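
For example (prefix and ASN are placeholders), making yourself less attractive via ISP-B, so that more inbound traffic arrives via ISP-A, could look roughly like:

policy-options {
    policy-statement EXPORT-TO-ISP-B {
        term OUR-AGGREGATE {
            from {
                route-filter 203.0.113.0/24 exact;    # your /24 (placeholder)
            }
            then {
                as-path-prepend "64512 64512";        # prepend your own ASN twice (placeholder ASN)
                accept;
            }
        }
        then reject;
    }
}

TE communities, where the ISP supports them, would be attached in the same term with 'community add', using values published by that ISP.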

Also, check the JUNOS Enterprise Routing book; it's all there.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp