[j-nsp] SRX/MX, GRE, BGP, VRF and indirect prefixes

2019-09-08 Thread Mike Williams
Hey all,

Am hoping I can get some pointers here, as this has me stumped.

I'm trying to provision an IPv6 prefix to a remote SRX.
So far I've got a GRE tunnel between the devices, with an IPv6 prefix on it.
The tunnel runs between the loopback of the MX (in the main table) and the
MX-facing interface of the SRX.
The MX end is in a virtual-router routing instance (routing-instances blah
interface gr-1/0/0.4702); the SRX end is in the appropriate security zone.
BGP between them works. Prefixes can be sent between them, and they appear in 
the relevant RIBs and FIBs.

However we can't actually route any traffic between them.
Nor can the prefix announced from the SRX be further announced from the MX to 
other MXes.
I think this is related to the fact that the prefixes are all indirect.
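
For reference, the MX end is shaped roughly like this (the group name is a
stand-in and this is simplified, not the exact config):

set routing-instances blah instance-type virtual-router
set routing-instances blah interface gr-1/0/0.4702
set routing-instances blah protocols bgp group srx type internal
set routing-instances blah protocols bgp group srx local-address 2001:db8:85a3:4702::c000:220
set routing-instances blah protocols bgp group srx family inet6 unicast
set routing-instances blah protocols bgp group srx neighbor 2001:db8:85a3:4702::4702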


SRX;

BGP    Preference: 170/-101
Next hop type: Indirect, Next hop index: 0
Address: 0x19ade9c
Next-hop reference count: 1
Source: 2001:db8:85a3:4702::c000:220
Next hop type: Router, Next hop index: 0
Next hop: 2001:db8:85a3:4702::c000:220 via gr-0/0/0.4702, 
selected
Session Id: 0x0
Protocol next hop: 2001:db8:85a3:4702::c000:220
Indirect next hop: 0x199c000 - INH Session ID: 0x0
State: 
Inactive reason: Route Preference
Local AS: 65536 Peer AS: 65536
Age: 1:30:53    Metric2: 0
Validation State: unverified
Task: BGP_65536.2001:db8:85a3:4702::c000:220
AS path: I
Accepted
Localpref: 100
Router ID: 192.0.2.32
Indirect next hops: 1
Protocol next hop: 2001:db8:85a3:4702::c000:220
Indirect next hop: 0x199c000 - INH Session ID: 0x0
Indirect path forwarding next hops: 1
Next hop type: Router
Next hop: 2001:db8:85a3:4702::c000:220 via 
gr-0/0/0.4702
Session Id: 0x0
2001:db8:85a3:4702::/64 Originating RIB: inet6.0
  Node path count: 1
  Forwarding nexthops: 1
Next hop type: Interface
Nexthop: via gr-0/0/0.4702



MX;

mikew@mcdpinrt1> show route 2001:db8:25f:2000::/52 exact extensive table 
blah.inet6.0
2001:db8:25f:2000::/52 (1 entry, 1 announced)
TSI:
KRT in-kernel 2001:db8:25f:2000::/52 -> {indirect(1048602)}
*BGP    Preference: 170/-101
Next hop type: Indirect, Next hop index: 0
Address: 0x572b5274
Next-hop reference count: 2
Source: 2001:db8:85a3:4702::4702
Next hop type: Router, Next hop index: 1748
Next hop: 2001:db8:85a3:4702::4702 via gr-1/0/0.4702, selected
Session Id: 0x586
Protocol next hop: 2001:db8:85a3:4702::4702
Indirect next hop: 0x94c0ce00 1048602 INH Session ID: 0x587
State: 
Local AS: 65536 Peer AS: 65536
Age: 1:48:35    Metric2: 0
Validation State: unverified
Task: BGP_65536.2001:db8:85a3:4702::4702+179
Announcement bits (2): 1-KRT 4-Resolve tree 4
AS path: I
Aggregator: 65536 10.249.105.32
Accepted
Localpref: 100
Router ID: 10.249.105.32
Indirect next hops: 1
Protocol next hop: 2001:db8:85a3:4702::4702
Indirect next hop: 0x94c0ce00 1048602 INH Session ID: 
0x587
Indirect path forwarding next hops: 1
Next hop type: Router
Next hop: 2001:db8:85a3:4702::4702 via 
gr-1/0/0.4702
Session Id: 0x586
2001:db8:85a3:4702::/64 Originating RIB: blah.inet6.0
  Node path count: 1
  Forwarding nexthops: 1
Next hop type: Interface
Nexthop: via gr-1/0/0.4702

mikew@mcdpinrt1> show route 2001:db8:85a3:4702::4702 extensive table blah

blah.inet6.0: 10 destinations, 10 routes (10 active, 0 holddown, 0 hidden)
2001:db8:85a3:4702::/64 (1 entry, 1 announced)
*Direct Preference: 0
Next hop type: Interface, Next hop index: 0
Address: 0x9659f44
Next-hop reference count: 2
Next hop: via gr-1/0/0.4702, selected
State: 
Local AS: 48447
Age: 2:41:08
Validation State: unverified
Task: IF
Announcement bits (2): 4-Resolve tree 4 

[j-nsp] EX4200/EX4550 VLAN translation

2018-07-06 Thread Mike Williams
Hey all,

So this is a new thing, to us at least.
We've got a need to transport some VLANs across a switch, without the 
intermediary seeing them, and we're confused.
I was hoping someone could show us the light.

We have 3 EX VCs
VC1 2xEX4200
VC2 2xEX4200+2xEX4550
VC3 2xEX4550
where VC1 is connected to the EX4200s of VC2, and the EX4550s of VC2 are 
connected to VC3.
VC2 to VC3 is already carrying tagged VLANs.
We need to get VLANs 128-137 from VC1 to VC3.
VC2 is already using VLANs in that range so we can't simply switch them 
across.

I know about .1q tunnelling; what I don't know is how to remove the S-VLAN.
"pop" isn't an option under "vlans <vlan-name> interface <interface-name> mapping",
and Junos doesn't want to accept "swap" on a trunk interface (VC2 to VC3 is a
trunk port).
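
For reference, the .1q-tunnelling side I do know about looks roughly like this
on VC2 (syntax from memory, and the S-VLAN ID and ports are made up); it's
removing that S-VLAN again at the far end that I can't see how to do:

set ethernet-switching-options dot1q-tunneling ether-type 0x8100
set vlans transport-vc1 vlan-id 1000
set vlans transport-vc1 dot1q-tunneling customer-vlans 128-137
set interfaces ge-0/0/0 unit 0 family ethernet-switching port-mode access vlan members transport-vc1
set interfaces xe-2/1/0 unit 0 family ethernet-switching port-mode trunk vlan members transport-vc1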

Help?


Thanks

-- 
Mike Williams




Re: [j-nsp] Juniper vMX

2017-01-25 Thread Mike Williams
Hi all,

We're just now looking at a similar requirement to Alex's, and I was wondering
what the current thoughts are on vMX.

Perhaps 2 or 3 Gbps, several full BGP views, maybe quite a lot of flowspec 
routes (100s possibly), various filter-based-forwarding, a logical system or 
two, and under 15 gigE ports (we'd use KVM and PCI passthrough).

Juniper's MX line lacks the routing-engine horsepower at the throughput scale
we're expecting.
If the MX104 had better routing engines maybe things would be different.


Many thanks.


On Friday 09 September 2016 17:27:52 Alex Valo wrote:
> Dear All,
> 
> I am just wondering if anybody here is using Juniper vMX in production with
> success? We have a project with a very small budget. The first phase will
> be about 200 Mbps  (6 months), then 500 Mbps (6 to 12 months) and finally 3
> Gbps (12 months and beyond).
> 
> Our design:
> 
> - 2 routers with iBGP/OSPF between them
> - 4 eBGP sessions with 1 Gbps from upstreams with full view
> - 2 eBGP sessions with 1 Gbps downstream with full view
> - a few static route subnets
> - filtering using regular route maps and traffic engineering using BGP
>   communities
> - syslog of all events
> - sFlow/NetFlow/IPFIX on the upstreams interfaces
> 
> Looking for some feedbacks:
> 
> Anybody keen to share their experience on either or both of these platforms?
> Anything to be specifically aware of?
> Which hardware do you use?
> What are your traffic levels?
> Did it work? Is it bullet-proof?
> 
> Looking forward to your messages and feedbacks.
> 
> Alex

-- 
Mike Williams


Re: [j-nsp] Sending iBGP prefixes to another iBGP neighbour

2016-05-05 Thread Mike Williams
Wow, fast responses!

Simply setting "cluster <32bit thing>", turning the MX104 into a route 
reflector, sorted it right out.
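
For the archives, that really is the whole change on the 104 (the group name
and cluster ID here are placeholders for ours):

set protocols bgp group ibgp cluster 192.0.2.1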


Thanks all.

Mike Williams

On Thursday 05 May 2016 17:37:06 Mike Williams wrote:
> Hey all,
> 
> I could very well either be doing this completely wrong, or attempting to do
> the impossible, but...
> 
> We have BIRD on Linux using BGP to send prefixes to the MX104 over a direct
> connection, I need to send those prefixes to an MX80 directly connected to
> the 104.
> 
> At the 104 end of the 104<->80 peering there is just an export policy, that
> simply matches on "from protocol bgp" and the BGP community assigned the
> prefixes I want, then accept and next-hop self.
> 
> In isolation, the policy works.
> 
> > test policy blah /32
> 
> ...
> ...
> 
> Policy blah: 1 prefix accepted, 0 prefix rejected
> 
> 
> The MX104 never actually advertises any prefixes to the MX80 though.
> 
> > show route advertising-protocol bgp 
> 
> ... zilch ...
> 
> 
> Is there some inbuilt protection preventing iBGP prefixes from being sent to
> another iBGP neighbour?
> Or am I just doing it wrong?
> 
> advertise-peer-as and as-override have no impact.
> 
> 
> Thanks

-- 
Mike Williams


Re: [j-nsp] Multi Core on JUNOS?

2015-11-30 Thread Mike Williams
On Saturday 03 October 2015 02:41:09 Olivier Benghozi wrote:
> I have heard that:
> 1) forget it about PowerPC CPUs (MX 80/104).

As we've got a couple MX104s on the bench waiting for some testing I decided 
to give this a try.
15.1F3, no SMP.

% sysctl hw.ncpu
hw.ncpu: 1
% 


It's also clear you really shouldn't be using this release, so who knows.

--- JUNOS 15.1F3.11 built 2015-10-27 19:44:29 UTC
At least one package installed on this device has limited support.
Run 'file show /etc/notices/unsupported.txt' for details.


-- 
Mike Williams


[j-nsp] Suggestions on management of dual-RE devices

2015-11-24 Thread Mike Williams
Hi all,

So we just got our first Juniper devices with dual REs (if you exclude Virtual
Chassis).
Before I get into actually configuring them, I'm wondering how others handle 
management, as I'm a touch confused.

Normally we just SSH/snmp to the loopback address, optionally jumping off from 
a device on the same OoB network if routing is down (yes, we should configure 
a backup router).

Juniper documents giving each RE its own loopback address.
If you do that, you'd have to detect if what you're connected to is master or 
backup, right?
That might be a necessary trade-off. With a single loopback address,
wouldn't the system SSH host key change as the loopback "moved" between the REs?
Can a 'global' single loopback even be configured?

Or do dual-RE devices actually work like virtual chassis, where the system SSH 
key is the same on all nodes, and connections to the backup are internally 
redirected to the master?
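
For context, the pattern I keep seeing in the docs is per-RE fxp0 addresses in
re0/re1 groups plus a shared master-only address, roughly like this (addresses
are placeholders):

set groups re0 interfaces fxp0 unit 0 family inet address 192.0.2.11/24
set groups re0 interfaces fxp0 unit 0 family inet address 192.0.2.10/24 master-only
set groups re1 interfaces fxp0 unit 0 family inet address 192.0.2.12/24
set groups re1 interfaces fxp0 unit 0 family inet address 192.0.2.10/24 master-only
set apply-groups [ re0 re1 ]

That covers the OoB side, but it's the loopback question above I'm really after.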


Thanks

-- 
Mike Williams


[j-nsp] iSCSI, fast write, slow read

2015-07-10 Thread Mike Williams
Hey all,

Firstly, this could very much possibly not be related to any Juniper equipment 
at all. If so, I apologise in advance.



So.
iSCSI.
4 servers.
Target has a 10Gbps Mellanox Connect-X3 Pro.
Three initiators have 1Gbps I350s.

Writing data, and only writing data, to all 3 initiators runs at ~2.5Gbps.
Exactly what I'd hope would happen.

However, reading data is pegged to ~1Gbps.
In fact, any reading of data at all pegs the throughput at ~1Gbps.
100Mbps reading, 900Mbps writing. 500Mbps reading, 500Mbps writing.

Not sure if this attachment will make it through, but attached is a graph
illustrating this.
Before 11:20 was a bonnie++ run reading and writing at the same time.
After that was ~80 minutes of pure writing.
Finally pure reading.


In the middle is an EX3300.

Hardware inventory:
Item              Version  Part number  Serial number  Description
Chassis                                 x              EX3300-48T
Routing Engine 0  REV 14   750-034247   x              EX3300 48-Port
FPC 0             REV 14   750-034247   x              EX3300 48-Port
  CPU                      BUILTIN      BUILTIN        FPC CPU
  PIC 0                    BUILTIN      BUILTIN        48x 10/100/1000 Base-T
  PIC 1           REV 14   750-034247   x              4x GE/XE SFP+
    Xcvr 0        REV 01   740-021308   x              SFP+-10G-SR
    Xcvr 1        REV 01   740-021308   x              SFP+-10G-SR
Power Supply 0                                         PS 100W AC
Fan Tray                                               Fan Tray

Physical interface: xe-0/1/0
Laser bias current:  7.904 mA
Laser output power:  0.5820 mW / -2.35 dBm
Module temperature:  41 degrees C / 106 degrees F
Module voltage:  3.3420 V
Receiver signal average optical power :  0.1781 mW / -7.49 dBm


Juniper branded optics both ends, single-mode fiber connecting them.

It has almost no config at all: 12.3R6.6, with a 9216-byte MTU configured on
all ports as the only change.


Now, the reason I'm writing here is I suspect the EX.

Reading across a back-to-back 10Gbps connection (same Connect-X3 Pros) goes at 
the sort of lick you'd expect for a disk subsystem that can sustain 2Gbps of 
writes.


Am I seeing some sort of buffering issue when a 10Gbps machine sends traffic
to 1Gbps machines?
Or maybe a physical limitation of the 3300?


Closest thing I could find was a discussion about Cisco switches.
https://supportforums.cisco.com/discussion/12483511/performance-issue-10gbs-1gbs-6880-x6800-ia-1521sy0a
I don't see any drops logged, and don't understand Cisco config.
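
For reference, where I've been looking for drops on the EX is just the standard
per-port and per-queue counters (interface name is only an example):

> show interfaces xe-0/1/0 extensive
> show interfaces queue xe-0/1/0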


Thanks for any advice!

-- 
Mike Williams


Re: [j-nsp] Thoughts on MX80 v MX104 RE performance

2015-04-23 Thread Mike Williams
I'm only seeing 64-bit Junos available for MX240 and up, and they already have
hugely powerful x86 REs as options.


On Thursday 23 April 2015 12:54:34 Eduardo Schoedler wrote:
 There is a 64-bit version of junos, someone already tested?
 
 Regards,
 
 2015-04-23 12:49 GMT-03:00 Raphael Mazelier r...@futomaki.net:
  Le 23/04/15 15:13, Saku Ytti a écrit :
  Yeah at least 10k has XEON. I don't really understand why vendors use
  PPC, is
  it mainly motivated by BOM or pincount/thermal or some other issues?
  
  Yes on cheap boxes, ppc processor still have a good power/price ratio.
  The real problem is the performance of junos on ppc.
  It can be acceptable on cheap switch (like the EX series), but not on MX.
  
   But yeah, JunOS relies on terrific single thread performance, but unsure
   
  how
  long they can get away with it.
  
  Juniper is aware of this problem, and work on this. Junos 15 should have
  limited smp capabilities (afaik rpd will still not be multi-thread).
  
   For some reason, IOS with equivalent CPU is much faster to converge.
  
  Cisco have made the effort to rewrite their os (and not only once, ios15,
  ios-xr, nxos, etc...). Juniper should make the same, and rewrite junos
  from
  scratch in my opinion, and I think the monolithic design of rpd should be
  rethinked.
  
  
  Regards,
  
  --
  Raphael Mazelier
  
  

-- 
Mike Williams


[j-nsp] DHCPv6 routing instance to reach server, or fooling flow sessions with firewall filters

2015-04-21 Thread Mike Williams
Hey all,


Got a problem here I'm hoping someone can help with.

Client - M-series (relay) - (VLAN 2920) J-series (VLAN 100) - Server 
Server - (VLAN 100) J-series (VLAN 2980) 

inet6.0 on the M knows to reach the DHCPv6 server via VLAN 2920, the J-series 
dutifully forwards the packets to the server and receives the response.
However, as the relayed request comes from the IP on the Client side of the M, 
the J-series wants to route the answer via VLAN 2980 (because it has a /56 
route that way for all the client networks).

If I add a /128 static route to the M via 2920 DHCPv6 works as expected.
That's not going to scale for even half a dozen networks, let alone 10s or 
more.


The M has a routing instance (type forwarding) that would use VLAN 2980 to 
reach the Server, but I haven't found a knob to make the dhcp-relay use a 
routing instance to reach the server.

I've tried making a routing instance on the J-series (type virtual-router) 
with a default route via VLAN 2920, and using a firewall filter to put DHCPv6 
packets into it.
term dhcpv6 {
from {
source-address {
::/0;
}
next-header udp;
source-port [ 547 546 ];
}
then {
count stateless-dhcpv6;
log;
routing-instance stateless;
}
}
Seems the flow lookup doesn't respect that.
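
(For completeness, the "stateless" instance that filter points at is nothing
exotic, roughly the following, though the interface placement and the next-hop
here are placeholders and part of what I'm unsure about:)

set routing-instances stateless instance-type virtual-router
set routing-instances stateless interface vlan.2920
set routing-instances stateless routing-options rib stateless.inet6.0 static route ::/0 next-hop 2001:db8::1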


Does anyone have any ideas?
Thanks


-- 
Mike Williams


[j-nsp] Thoughts on MX80 v MX104 RE performance

2015-04-20 Thread Mike Williams
Hey all,

There was a discussion May last year about the MX104 and BGP performance.
With the take away being that the MX104 RE is still pretty weak, at least 
compared to the modern x86 REs fitted to some of the bigger models.
The RE-B-1800x1 in an M7i is certainly night and day faster than an MX80!


On Friday 16 May 2014 22:04:05 Saku Ytti wrote:
...
> All MX, T, M linecards use Freescale PQ3 family processors. MX80
> control-plane as well.
> Freescale is phasing out PQ3 and MX104 uses QorIQ in control-plane and in
> 'linecard'.
>
> Exact model for MX80 is 8572 and MX104 is P5021
...


The MX104 RE is 500MHz faster and a newer CPU architecture, so potentially
50% faster? 60% faster? 100% faster?


Does anyone out there have any experience of the relative performance of the 
MX104 RE over the MX80 RE?
RE-B-1800x1 performance would be very nice, but not exactly probable.


Our usage (multiple full BGP views, into multiple tables with altered 
preferences) puts an awful strain on the MX80 RE each time a policy change is 
made, or peers flap. Not end of the world bad, but pretty bad.



And relatedly, has anyone heard any recent rumours around when Junos might 
take advantage of the second CPU?
From the Freescale docs both CPUs are dual-core.


Thanks

-- 
Mike Williams


Re: [j-nsp] configuration archival, commit comments

2014-03-31 Thread Mike Williams
I managed to find an hour or so today to look at this further.
A commit script is the result. My very first!


match configuration {
    var $comment = $junos-context/commit-context/commit-comment;
    if (jcs:empty($comment)) {
        <xnm:error> {
            <message> "No comment specified";
        }
    }
    var $version = version;
    <change> {
        <junos:comment> "Automatic commit annotation; " _ $comment;
        <version> $version;
    }
}


Junos would just complain "warning: statement has no contents; ignored" when I
tried adding just a comment/annotation, hence the 'change' to version.
version is the very first line of the config, so the annotation ends up right 
at the top.
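
In case anyone wants to reuse it: the script just gets dropped into
/var/db/scripts/commit/ on the RE and enabled with something along the lines of
(the filename is whatever you save it as):

set system scripts commit file commit-comment.slax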


Separately we commit every archived config to a git repo so we end up with 
stuff like this;

$ git diff r1..r2
...
@@ -1,5 +1,5 @@
-## Last changed: 2014-03-31 13:46:45 UTC
-/* Automatic commit annotation; remove old annotation */
+## Last changed: 2014-03-31 13:50:03 UTC
+/* Automatic commit annotation; test annotation */
...
...


Mike Williams

On Thursday 20 March 2014 13:50:21 Mike Williams wrote:
> Hi all,
>
> Random thought for the day.
>
> You can archive the entire config after each commit (archival configuration
> transfer-on-commit).
> You can apply a comment to each commit (# commit comment blah)
>
> How do you archive that comment?
>
> It's not included in the config.
>
>
> Thanks

-- 
Mike Williams


[j-nsp] configuration archival, commit comments

2014-03-20 Thread Mike Williams
Hi all,

Random thought for the day.

You can archive the entire config after each commit (archival configuration 
transfer-on-commit).
You can apply a comment to each commit (# commit comment blah)

How do you archive that comment?

It's not included in the config.


Thanks

-- 
Mike Williams


Re: [j-nsp] ex switch VCP cabel

2014-02-10 Thread Mike Williams
On Monday 10 February 2014 09:44:51 Yucong Sun wrote:
> Hi,
>
> VCP cable for EX switch looks a lot like a plain SFF-8088 cable, can
> someone confirm?  SFF-8088 cable is sold $10 on ebay, while the VCP
> cable is at least $100...

VCP cables are, at least for the EX4200, PCIe x8 not SAS.

-- 
Mike Williams


Re: [j-nsp] batch on junos ?

2014-01-14 Thread Mike Williams
On Tuesday 14 January 2014 10:28:43 R S wrote:
> Is there a way to run a sort of .bat on SRX junos ?
>
> I mean, to run a single command from cli to do some actions (set xxx / set
> yyy / commit check / commit) ?
>
> This is useful to be run by the NOC for scheduled actions every day.
>
> Tks

In a bash shell you can do this;

$ (cat << EOF
configure
set xxx
set yyy
commit check
commit
EOF
) | ssh -T router


Or put your configure, set, delete, replace, etc, commit into a file and;

cat file | ssh -T router

I do this to batch change passwords and the like.


The -T just suppresses the "Pseudo-terminal will not be ..." warning from SSH.

-- 
Mike Williams


[j-nsp] J-series, hopping packets between routing-instances

2013-11-07 Thread Mike Williams
Hi all,

I might have painted myself into a corner here, so I'm here looking for 
options from people far cleverer than I.

Firstly, a bit of history.

We're using J6350s, and SRX650s, as security devices on a stick.
Our Ms and MXs punt packets into a routing instance on the security devices 
with firewall filters. Those routing instances purposely only use the most 
basic of static routes possible (10/8, 192.168/16, etc), so we can be certain 
what zones packets pass through so the policies match.

That all works fine.


We're also centralising our inter-site IPSec onto the Js and SRXs, but need 
OSPF there, so have a second routing-instance and a partial mesh of routed 
tunnels between them.
Still, all good.
Offices and what-not have tunnels tied directly to the IPSec routing-instances 
and OSPF metrics keep traffic flows sane.
All hunky dory.



Now the problem.

I need to take traffic from servers behind an M/MX, have it policy'd by the
'security' routing instance, then encrypted by the 'ipsec' routing-instance.
If I punt traffic into security, let it come back to the router, then punt 
it back into ipsec, everything works as expected.
However each packet has to pass across the M/MX-J/SRX link 4 times, in out, 
in out. Shake it all about.

Obviously this would be better if we could shortcut the M/MX step in the 
middle and move packets from security to ipsec, and ipsec to security 
directly.

As 'security' doesn't run OSPF/BGP/ISIS/etc, adding a static route with
next-table ipsec.inet.0 is fine.
'ipsec' *does* run OSPF though, so I need to do FBF to override that. I've
tried a 'then routing-instance security' filter applied on output on the
interface facing the M/MX, but my traffic gets lost somewhere. Security
policies from 'input-ipsec-zone' to 'output-security-zone' were added.


I'm wondering if 'moving' packets from routing-instance to routing-instance on 
a flow-mode device simply screws up security policies, as one of the input or
output interfaces doesn't exist in the routing-instance.
So I figured *routing* packets from routing-instance to routing-instance would 
be better. Time for some logical tunnels! J-series devices don't support 
logical tunnels though.

Argh!

-- 
Mike Williams


[j-nsp] family inet6 on st0.x

2013-08-05 Thread Mike Williams
Hey all,

Am I being dense, or now that 'family inet6' can be configured on an st0.x 
interface, does it not actually work?


I've configured the following on a pair of J6350 clusters;

set interfaces st0 unit 634 description rmdcccjs-dwdcccjs
set interfaces st0 unit 634 family inet mtu 1500
set interfaces st0 unit 634 family inet address 10.xxx.xxx.135/31
set interfaces st0 unit 634 family inet6 mtu 1500
set interfaces st0 unit 634 family inet6 address 2a02::87/64
set security ike gateway rmdcccjs-dwdcccjs ike-policy tunnel-pol
set security ike gateway rmdcccjs-dwdcccjs address 178.xxx.xxx.251
set security ike gateway rmdcccjs-dwdcccjs external-interface reth1.500
set security ike gateway rmdcccjs-dwdcccjs version v2-only
set security ipsec vpn rmdcccjs-dwdcccjs bind-interface st0.634
set security ipsec vpn rmdcccjs-dwdcccjs ike gateway rmdcccjs-dwdcccjs
set security ipsec vpn rmdcccjs-dwdcccjs ike proxy-identity local 
10.xxx.xxx.135/31
set security ipsec vpn rmdcccjs-dwdcccjs ike proxy-identity remote 
10.xxx.xxx.134/31
set security ipsec vpn rmdcccjs-dwdcccjs ike proxy-identity service any
set security ipsec vpn rmdcccjs-dwdcccjs ike ipsec-policy tunnel-pol
set security ipsec vpn rmdcccjs-dwdcccjs establish-tunnels immediately
set security zones security-zone ipsec_vpn interfaces st0.634
set routing-instances ipsec interface st0.634
set routing-instances ipsec protocols ospf area 0.0.0.0 interface st0.634
set routing-instances ipsec protocols ospf3 area 0.0.0.0 interface st0.634


Where 10.xxx.xxx.134/31 and 2a02::87/64 are appropriately swapped/changed at 
the other end.
The devices are entirely flow-mode (security forwarding-options family inet6 
mode flow-based).
One cluster is 12.1X45-D10, the other 12.1X44-D15.5.
The MTU between the devices is at least 1800 bytes all the way through.
reth1.500 is also in the ipsec_vpn zone, and all intra-zone traffic is 
permitted.
I've even had host-inbound-traffic set to all all.


IPv4 works fine, but IPv6 just, well, doesn't.

Can't ping the link-local or global addresses across the tunnel, OSPF3 hellos 
are being sent but not received.
'monitor traffic interface st0.634' says OSPFv2 hellos are coming In and Out, 
and unknown protocol (0x006c) is going Out only.


Pretty much the only documentation I can find is for IPSec over IPv6 (as in, v6 
gateway addresses).
Nowt about configuring IPv6 on the tunnel interface.


I don't mind if anyone does prove I'm being dense!

Thanks

-- 
Mike Williams


Re: [j-nsp] Clustering J-series across a switch

2013-04-05 Thread Mike Williams
On Tuesday 02 April 2013 17:47:08 Mike Williams wrote:
> I accept that clustering across a switch isn't necessarily advisable, I'm
> just wondering if it's fundamentally possible.
> Has anyone ever even tried to put a switch between a J-series, or
> SRX-series, cluster?

Thanks very much to all those who responded.
I've now got a J-series cluster across our EX VC ring.

It really was as simple as putting each pair of ports into its own VLAN with
access ports. The special ether-type packets Just Work.
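
For anyone finding this in the archives later, the EX side really is just a
dedicated access VLAN per link, along these lines (VLAN ID and ports are
examples, not our exact config):

set vlans srx-control vlan-id 4001
set interfaces ge-0/0/10 unit 0 family ethernet-switching port-mode access vlan members srx-control
set interfaces ge-2/0/10 unit 0 family ethernet-switching port-mode access vlan members srx-control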

-- 
Mike Williams


[j-nsp] Clustering J-series across a switch

2013-04-02 Thread Mike Williams
Hey all,

So I've been reading the clustering docs, and they make it pretty clear that 
the (at least) control link should connect the devices back-to-back.
I don't have the page to hand but there is an option to configure the control 
link in the old way, using (a?) VLAN (4094 IIRC), otherwise new clusters will 
use a special ether-type.

Now if Junos is going to use a new ether-type for control link communication 
it's pretty certain the devices would have to be connected back-to-back, but 
if control link traffic is within a specific VLAN switching it shouldn't be a 
problem, right? I'd q-in-q the traffic anyway.

The health of the control and fabric links is determined by heartbeats only, 
not link state, so a switch wouldn't hurt that.

I accept that clustering across a switch isn't necessarily advisable, I'm just 
wondering if it's fundamentally possible.
Has anyone ever even tried to put a switch between a J-series, or SRX-series, 
cluster?

Thanks


Currently we've 2 J6350s on different floors of a building, with different 
providers. Around that building we have a 10Gbps VC ring of EX3300s. We want 
to cluster the J-series' but don't want the hassle or cost of running copper 
between the providers (if that's even possible) when the VC is way more than 
fast enough.
Traffic levels are way way below 10Gbps, and it's highly unlikely they'll ever 
get that high.

-- 
Mike Williams


Re: [j-nsp] LACP reliability?

2012-12-21 Thread Mike Williams
On Thursday 20 December 2012 14:56:59 Morgan McLean wrote:
> Hi,
>
> I was just curious if anybody had feedback regarding LACP reliability when
> a system is under load etc. Wondering if its common for a box to come under
> load, stop sending LACP packets at their expected intervals and get dropped
> by the upstream switch etc.

We were hit by this on a pair of M7is, after a flap within an upstream caused 
every route to change.
The poor little RE400 couldn't keep up with both jobs.
They did have 3 full tables to deal with. 2 upstreams, and iBGP between.

Never had a Linux machine do this however.
But they're all at least an order of magnitude faster.

-- 
Mike Williams


Re: [j-nsp] SRX doing IPv6 on DSL

2012-12-10 Thread Mike Williams
SRX can't do it, yet.

http://forums.juniper.net/t5/SRX-Services-Gateway/Branch-SRX-as-a-DHCPv6-prefix-delegation-client/m-p/158172#M20307

On Tuesday 11 December 2012 00:17:44 Julien Goodwin wrote:
 (Thunderbird crashed taking away my first response)
 
 Skeeve's post is spurred by a post of mine to Ausnog earlier today
 looking for a new reliable home ADSL CPE.
 
 In fact although I can now set family inet6 on a PPPoE interface, I
 can't do something similar to family inet negotiate-address which
 makes it useless for consumer circuits, even if I could avoid the need
 for DHCP-PD (previously my ISP required DHCP-PD before they'd route a
 static block, this may have changed).
 
 The fact that I can't even do SLAAC on an Ethernet port means it's also
 not usable if I was on FTTH.
 
 On 10/12/12 23:26, Skeeve Stevens wrote:
  Hey all,
  
  Does anyone know is the SRX110 is capable of doing DHCP-PD or 6RD yet?
  
  If not, does anyone know of a X release or when it may hit mainline?
  
  IPv6 is starting to get popular with engineers and at the moment all they
  seem to be able to use are Cisco 877/887 and ISR's with DSL WIC cards.
  
  Surely Juniper has some plans afoot?
  
  ...Skeeve
  
  Skeeve Stevens, CEO - eintellego Pty Ltd
  ske...@eintellego.net ; www.eintellego.net
  
  Phone: 1300 753 383; Cell +61 (0)414 753 383 ; skype://skeeve
  
  facebook.com/eintellego ;  http://twitter.com/networkceoau
  linkedin.com/in/skeeve
  
  twitter.com/networkceoau ; blog: www.network-ceo.net
  
  The Experts Who The Experts Call
  Juniper - Cisco – IBM - Brocade - Cloud
  -
  Check out our Juniper promotion website!  eintellego.mx
  Free Apple products during this promotion!!!
-- 
Mike Williams


Re: [j-nsp] SRX100 for dual 100M uplink routing network in packet mode.

2012-11-28 Thread Mike Williams
On Tuesday 27 November 2012 23:08:04 Michel de Nostredame wrote:
> PS: I just got a SRX100 and am going to do some POC with
> selective-packet-mode. Basically I want to route my traffic into GRE
> tunnel in packet-mode and route GRE packet over IPsec to remote SSG
> site in flow-mode because IPsec needs flow module. Hopefully this can
> suppress my session-table usage to only one for two records. I hate
> flow-mode JUNOS for a long long long time since J-series, but the SRX
> prices are simply irresistible.

Michel,
We wanted to do that with some SRX650s.
Doesn't work. Sorry.

Seems like some flag is on the packet saying it's packet-mode, which isn't 
removed/reset when it's wrapped in a GRE header, so IPSec sees a packet-mode 
packet and drops it.

This was with 10.4R6.5, we didn't get the chance to try anything newer.

-- 
Mike Williams


Re: [j-nsp] Adjusting OSPF metric based on VRRP state?

2012-10-01 Thread Mike Williams
Thanks for all the responses.

Clustering does indeed seem to be by far the best solution.
I think I'll take a crack at an event script anyway, as I haven't touched them 
before, and any knowledge would probably be useful eventually.
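
If I do get anywhere with it, my starting point is something along these lines
(the event name, interface and metric are guesses from memory that I haven't
verified yet):

set event-options policy vrrp-master events VRRPD_NEW_MASTER
set event-options policy vrrp-master then change-configuration commands "set protocols ospf area 0.0.0.0 interface vlan.100 metric 10"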

On Thursday 27 September 2012 17:30:42 Mike Williams wrote:
 Hey all,
 
 I've been poking around for a while now but haven't been able to find
 anything, which does pretty much suggest this isn't possible.
 
 So, I've got 2 J6350s in full flow-mode guise on 11.4, but not a cluster.
 I am trying to use VRRP for some HA though.
 Because they're both on the same network segment they both announce that
 prefix into OSPF, and that's causing me a problem.
 If a TCP session arrives via J1 and J2 is the VRRP master, J2 will
 drop/reject the SYN-ACK as it didn't deal with the SYN.
 
 Now I know I could set flow tcp-session no-syn-check to effectively ignore
 the problem, or given suitable amounts of interest/time/effort we could
 probably cluster the 2 devices (different Colo providers in the same
 building), or even use some creativity with static routes (urgh) to bypass
 OSPF entirely, but I'm hoping there is some magic OSPF/VRRP knob I haven't
 been able to find yet that will alter the OSPF metric for a logical
 interface based on the VRRP state.
 
 Does anyone know if such a knob exists?
 
 Honestly I'm not holding out much hope, as there isn't a direct corrolation
 between VRRP and logical interfaces (many VRRP groups per unit).
 
 
 Thanks
-- 
Mike Williams
Leading Systems Administrator - Infrastructure Support Group
Comodo CA Ltd
Office Tel Europe: +44 (0) 161 8747070
Fax Europe: +44 (0) 161 8771767

[j-nsp] Adjusting OSPF metric based on VRRP state?

2012-09-27 Thread Mike Williams
Hey all,

I've been poking around for a while now but haven't been able to find 
anything, which does pretty much suggest this isn't possible.

So, I've got 2 J6350s in full flow-mode guise on 11.4, but not a cluster.
I am trying to use VRRP for some HA though.
Because they're both on the same network segment they both announce that 
prefix into OSPF, and that's causing me a problem.
If a TCP session arrives via J1 and J2 is the VRRP master, J2 will drop/reject 
the SYN-ACK as it didn't deal with the SYN.

Now I know I could set flow tcp-session no-syn-check to effectively ignore 
the problem, or given suitable amounts of interest/time/effort we could 
probably cluster the 2 devices (different Colo providers in the same 
building), or even use some creativity with static routes (urgh) to bypass 
OSPF entirely, but I'm hoping there is some magic OSPF/VRRP knob I haven't 
been able to find yet that will alter the OSPF metric for a logical interface 
based on the VRRP state.

Does anyone know if such a knob exists?

Honestly I'm not holding out much hope, as there isn't a direct correlation
between VRRP and logical interfaces (many VRRP groups per unit).


Thanks

-- 
Mike Williams


[j-nsp] duplicate acks, EX3300 VC

2012-05-17 Thread Mike Williams
Hey all,

Before I punt this to JTAC, has anyone had any experience with 
poor/highly-variable TCP throughput from a small stack of EX3300s?

We've got a stack of 3, one 48 port, and two 24 ports, and since they went in 
we can't get reliable TCP transfers transatlantic.
Linux-Linux can go really fast, but involve Windows and we get a pitiful
~100KBps, regardless of tuning done.
Junos is 11.4R2.14.

It's taken us *forever* to home in on the issue possibly being the EXs,
because who'd have thought a switch couldn't handle packets at a few 10s of 
megabytes per second (10-20k PPS x 3).

To cut a long story short:
internet <-> srx650 <-> ex3300 <-> linux firewall <-> same ex3300 <-> server
Linux firewall sees the 2 initial TCP packets correctly, but the server 
generally only gets the second one, or if it gets the first it's after the 
second. Then we're into a bazillion duplicate acks, out-of-order packets, and 
TCP retransmissions.

I found the 'show system statistics tcp' command a short while ago and it's, 
well, interesting.


 show system statistics tcp
fpc0:
--
Tcp:
 84769061 packets sent
 16676437 data packets (2039615568 bytes)
 1416 data packets retransmitted (1526176 bytes)
 0 resends initiated by MTU discovery
 67141526 ack only packets (23539653 packets delayed)
 0 URG only packets
 0 window probe packets
 22 window update packets
 3468634 control packets
 125994683 packets received
 15916504 acks(for 2039560634 bytes)
 82630576 duplicate acks
 0 acks for unsent data
 25574925  packets received in-sequence(3702132560 bytes)
 43149892 completely duplicate packets(5824 bytes)
 10 old duplicate packets
 5 packets with some duplicate data(2140 bytes duped)
 0 out-of-order packets(0 bytes)
 0 packets of data after window(0 bytes)
 0 window probes
 24585 window update packets
 23 packets received after close
 0 discarded for bad checksums
 0 discarded for bad header offset fields
 0 discarded because packet too short


fpc1 and fpc2 have similar numbers, even though these packets have no need to 
leave fpc0. There aren't even any active servers off fpc1/2 yet.
fpc0 has been up 33 days, so has seen almost 30 duplicate acks per second 
since it booted.

-- 
Mike Williams


Re: [j-nsp] EX-UM-2X4SFP- 2-port 10G SFP+ / 4-port 1G SFP Uplink Module

2012-04-12 Thread Mike Williams
On Tuesday 21 February 2012 14:44:19 Timh Bergström wrote:
>> Hope nobody minds me butting in here, but this brings up a related
>> question for me.
>>
>> The built in uplink ports in the EX3300. Do they support running 2 at
>> 10Gb (for VC) and 2 at 1Gb for regular ethernet?
>> I'm sure I've seen it written that all four ports can be used at 10Gb, if
>> true that would support my belief mixed mode operation is supported too.

> Afaik two of the four 10Gb ports are pre-configured for VC, the other
> two can be used for ethernet out of the box, or you can use one for VC
> and three for ethernet or the other way round, no problems (at least
> that's what the juniper SE told me when I bought mine).

We got our stack of EX3300s yesterday.
I can confirm that (with 11.4R2.14 at least) you can do mixed 1/10GbE on the 
EX3300.
I've got 3 in a VC ring of 10Gb using ports 2 and 3, 10GbE in port 0, and 1GbE 
in port 1. Didn't have to do anything special either, although I did tell the 
VC to use ports 2 and 3 which was probably unnecessary.
1GbE SFPs cause a ge-x/1/x interface to appear, and 10GbE SFPs cause an 
xe-x/1/x interface to appear.
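
(The "telling the VC to use ports 2 and 3" bit was just the usual vc-port
conversion run on each member, e.g.:)

> request virtual-chassis vc-port set pic-slot 1 port 2
> request virtual-chassis vc-port set pic-slot 1 port 3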

-- 
Mike Williams
Senior Infrastructure Architect
Comodo CA Ltd
Office Tel Europe: +44 (0) 161 8747070
Fax Europe: +44 (0) 161 8771767


Re: [j-nsp] JunOS 10.4R8.5 on MX5? Am I forced to run 11.4+?

2012-03-23 Thread Mike Williams
On Thursday 22 March 2012 13:00:57 Timh Bergström wrote:
> I'm going to try to install 11.2R5.4 (recommended release) on it now,
> I do not trust a R1-release. Ever. ;-)

FYI, I've just noticed 11.4R2.14 appeared yesterday (22nd).
Only been 4 months since the R1 release...

Typically I downloaded and upgraded an SRX220 to 11.2 yesterday (evening GMT) 
for better IPv6 support, as no new 11.4 release had shown up yet.

-- 
Mike Williams


Re: [j-nsp] Rack mounting a EX4200-48PX, concerned about weight

2012-03-21 Thread Mike Williams
On a few occasions I've used those plastic washers you get with cage nut sets 
between the post and the ear on the top holes, packing the top out a few 
millimeters to take out some of the sag. It can be tricky to hold the washer 
in place while putting the screw through.
You could also just put the top screws in *then* mount the switch to achieve 
the same result, but with only 2 screws holding the device in.

On Tuesday 20 March 2012 23:18:21 Bill Blackford wrote:
 I actually mounted one of these in an older cabinet where the cage
 nuts were not the tightest fit, but (and I do not advocate this)
 taking a laptop bag strap along the back of the chassis and up to
 points on the back of the cabinet to act as a sling to help hold up
 the back.

 Four post rails would probably be the best solution.

 -b

 On Tue, Mar 20, 2012 at 3:19 PM, James Baker ja...@jgbaker.co.nz wrote:
  I've got a couple of new EX4200-48PX with dual 930W power supply which
  have just arrived and I'm quite concerned about the weight of the units
  in relation to the rack ears. It is the same ears for the EX4200/3200
  family.
 
  Has anyone racked these before, if so how much sag do you get and do you
  suggest a shelve underneath?
 
  I've seen what a Cisco2811 does and how much it sags and this will be a
  lot worse.
 
  Thanks
 
  James
 



-- 
Mike Williams


Re: [j-nsp] Supported REs for M7i

2012-03-19 Thread Mike Williams
Hey Phil,

I can't confirm if the RE-1800 is shipping yet or not, or what compatibility 
issues might arise, but we've had a quote for them.
If you buy a new M7i (M7iBASE-AC-1GE) for $25k Juniper will replace the stock 
RE with the RE-1800 (RE-B-1800X1-4G-BB) for $5k.
To buy just the RE-1800 (RE-B-1800X1-4G-R) it's $20k.

Those are list prices (which if you search are available online, in case 
anyone from Juniper disapproves of me saying!).



RE-B-1800X1-4G-BB
Routing Engine with 1.73GHz processor, 4GB DDR3 memory with ECC, 64
GB SSD and 4GB compact flash

On Friday 16 March 2012 17:49:06 Phil Mayers wrote:
 All,

 We have a pair of M7i which in a previous life served as our border
 routers.

 Since we moved to 10gig, they've served primarily as PIM RPs since they
 have the tunnel services PIC.

 We're taking the opportunity to re-architect our network, and I want to
 use them as route-reflectors. They would not be forwarding traffic.

 They currently have RE-400-256, which have been (ahem) locally
 upgraded to 768Mb RAM and 1Gb flash.

 I see the original RE-400-256 is now well past EoS, but can find no
 mention of the RE-400-768.

 I also note the RE-850 is now EoL.

 I see mention on Juniper's website of a version of the RE-1800 for the
 M7i, which doesn't seem to be available yet. Does anyone know the likely
 price?

 Should I be looking to go to RE-1800 when it appears? Does anyone know
 the likely cost? Are there any compatibility issues e.g. you need
 CFEB-super-duper or a particularly new version of the chassis?

 Cheers,
 Phil



-- 
Mike Williams


Re: [j-nsp] EX-UM-2X4SFP- 2-port 10G SFP+ / 4-port 1G SFP Uplink Module

2012-02-27 Thread Mike Williams
That is certainly the way it is for the 3200 and 4200
http://www.juniper.net/techpubs/en_US/release-independent/junos/topics/task/configuration/uplink-module-ex3200-ex4200-sfp-plus-mode-setting-cli.html

However it seems the 3300 is a different beast, or at least that's what I 
hope!
From the datasheet on 
http://www.juniper.net/us/en/products-services/switching/ex-series/ex3300/#literature


Uplink
• Fixed 4-port uplinks which can be individually configured as GbE
  (SFP) or 10GbE (SFP+) ports.


I've yet to find documentation detailing exactly how you go about that though.

On Monday 27 February 2012 14:25:12 Nick Kritsky wrote:
 As far as I remember you have to explicitly select 10g or 1g mode on PIC
 level for EX uplink module. This automatically rules out any mixed mode
 setup.

 NK

 2012/2/21 Timh Bergström timh.bergst...@videoplaza.com

  On Tue, Feb 21, 2012 at 12:03 PM, Mike Williams
 
  mike.willi...@comodo.com wrote:
   On Tuesday 21 February 2012 08:33:53 Jeff Wheeler wrote:
  
   The built in uplink ports in the EX3300. Do they support running 2 at
 
  10Gb
 
   (for VC) and 2 at 1Gb for regular ethernet?
   I'm sure I've seen it written that all four ports can be used at 10Gb,
 
  if true
 
   that would support my belief mixed mode operation is supported too.
 
  Afaik two of the four 10Gb ports are pre-configured for VC, the other
  two can be used for ethernet out of the box, or you can use one for VC
  and three for ethernet or the other way round, no problems (at least
  that's what the juniper SE told me when I bought mine).



-- 
Mike Williams
Senior Infrastructure Architect
Comodo CA Ltd
Office Tel Europe: +44 (0) 161 8747070
Fax Europe: +44 (0) 161 8771767


Re: [j-nsp] EX-UM-2X4SFP- 2-port 10G SFP+ / 4-port 1G SFP Uplink Module

2012-02-21 Thread Mike Williams
On Tuesday 21 February 2012 08:33:53 Jeff Wheeler wrote:
> On Tue, Feb 21, 2012 at 3:05 AM, Skeeve Stevens
> skeeve+juniper...@eintellego.net wrote:
>> The common thought was that you could use EITHER the 2 x 10Gb OR the 4 x
>> 1Gb.

> You are correct.  It will not operate in a mixed mode.

Hope nobody minds me butting in here, but this brings up a related question 
for me.

The built in uplink ports in the EX3300. Do they support running 2 at 10Gb 
(for VC) and 2 at 1Gb for regular ethernet?
I'm sure I've seen it written that all four ports can be used at 10Gb, if true 
that would support my belief mixed mode operation is supported too.

Thanks

-- 
Mike Williams


[j-nsp] Sources for SFP+ optics

2012-02-21 Thread Mike Williams
Hey all,

While I'm thinking about 10Gb in EX3300s.
Does anyone have a reliable source for 10Gb SFP+s suitable for VC use in EXs?
US or Europe doesn't really matter, but US would be easier.

We should be in the market for 20 or so shortly, to connect 4 bunches of 3300s 
into VCs.
Only 4 would be driving cable lengths anywhere near 200 meters, but all would 
be on SMF for consistency, if that matters at all.

Thanks

-- 
Mike Williams


Re: [j-nsp] Practical VPLS examples (SRX and J series)

2011-11-16 Thread Mike Williams
On Friday 11 November 2011 17:42:29 Mike Williams wrote:
> So. VPLS. Point-to-multiple-point. Virtual LAN. Brilliant!

I managed to build up the courage, and time, to have a crack at this today,
figuring it could take a while.
However it took me less than an hour to convert my mesh of l2vpns to a VPLS 
instance.


David, your from-memory example was almost exactly perfect.
JUNOS gives you 2 options for the VPLS encapsulation (ethernet and 
ethernet-vpls, as you suggested), however neither is valid!
Specifying no encapsulation works fine though.

[edit routing-instances VPLS_vr protocols vpls encapsulation-type]
  'encapsulation-type ethernet'
Encapsulation type not valid for vpls
error: configuration check-out failed


Thanks everyone.



# show routing-instances VPLS_vr
instance-type vpls;
interface lt-0/0/0.5501;
route-distinguisher 500:5501;
vrf-target target:500:500;
protocols {
vpls {
site-range 15;
no-tunnel-services;
site rmdcjs1 {
site-identifier 1;
interface lt-0/0/0.5501;
}
}
}

# show interfaces lt-0/0/0
unit 501 {
encapsulation ethernet;
peer-unit 5501;
family inet {
address 10.250.250.1/27;
}
}
unit 5501 {
encapsulation ethernet-vpls;
peer-unit 501;
}

-- 
Mike Williams


[j-nsp] Practical VPLS examples (SRX and J series)

2011-11-11 Thread Mike Williams
So today I created a mesh of L2VPNs interconnecting virtual-routers on 5 
SRX650s and J6350s.
I did the 3 650s as a trial, then added 2 J6350s later because, well, I could.
Configuring a triangle of RSVP-signalled paths, BGP neighbours, and logical 
tunnels, wasn't too bad. Adding 2 more points made it almost maddeningly 
confusing.
We'll be adding more sites sooner-or-later too, and my brain is unlikely to 
cope with any more sites increasing the mesh exponentially.

So. VPLS. Point-to-multiple-point. Virtual LAN. Brilliant!
I haven't yet found any documentation that I can actually understand though.
"Note: The site range value must be greater than the largest site identifier."
is especially confusing. Range is one number, bigger than any other, hmm.

Could some kind gentle person provide a practical example of VPLS in action, 
for the hard of thinking please?
In simple terms we have 5 devices directly connected to each other (full 
mesh), and all 5 will have a CE (virtual-router) connected to it via ethernet 
logical tunnels.


Thanks!


Currently I'm doing something like this (snipped for brevity);


# show routing-instances vr-l2vpn
instance-type l2vpn;
interface lt-0/0/0.5036;
interface lt-0/0/0.5077;
interface lt-0/0/0.5135;
interface lt-0/0/0.5136;
route-distinguisher 500:5034;
vrf-target target:500:500;
protocols {
l2vpn {
encapsulation-type ethernet;
site fsed {
site-identifier 34;
interface lt-0/0/0.5036 {
remote-site-id 2;
}
interface lt-0/0/0.5077 {
remote-site-id 33;
}
interface lt-0/0/0.5135 {
remote-site-id 101;
}
interface lt-0/0/0.5136 {
remote-site-id 102;
}
}
}
}

# show interfaces lt-0/0/0
unit 135 {
encapsulation ethernet;
peer-unit 5135;
family inet{
address 10.200.135.35/24;
}
}
unit 5135 {
encapsulation ethernet-ccc;
peer-unit 135;
family ccc {
filter {
input packet-mode-ccc;
}
}
}

# show protocols mpls
path-mtu {
rsvp mtu-signaling;
}
label-switched-path fsed-rmdcjs1 {
from a.b.c.d;
to w.x.y.z;
bandwidth 90m;
no-cspf;
fast-reroute;
primary fsed-rmdcjs1;
}
path fsed-rmdcjs1 {
e.f.g.h strict;
}


-- 
Mike Williams


[j-nsp] MX RE how fast is slow

2011-09-08 Thread Mike Williams
Hi all,

Recently a discussion touched on the routing engine speed of the MX series, 
but there wasn't much like a real world comparison.
So my question is, how slow is the RE on an MX80 compared to it's bigger 
brethren?

I ask because we find the MX80 slow, really slow.
As we've got 2 distinctly different traffic types, and 2 distinctly different 
upstreams (1Gbps and 10Gbps), we're using a rib group and policy to populate 
2 additional ribs with different local preferences applied to the learnt 
routes. Filters direct packets to the right table.
It'll take the RE a good 10-15 minutes to churn through that job, and that's a 
bit annoying when you make a small change to an unrelated policy!
Now, is that us being stupid, or the RE being slow? I know what I'd like to 
hear :)
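
For the curious, the shape of it is roughly this (the instance, rib-group and
policy names are stand-ins for ours):

set routing-instances pref-a instance-type forwarding
set routing-instances pref-b instance-type forwarding
set routing-options rib-groups multi-pref import-rib [ inet.0 pref-a.inet.0 pref-b.inet.0 ]
set routing-options rib-groups multi-pref import-policy adjust-localpref
set protocols bgp group transit family inet unicast rib-group multi-pref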


Cheers

-- 
Mike Williams


Re: [j-nsp] MX RE how fast is slow

2011-09-08 Thread Mike Williams
Sorry, the commit time is fine, a couple of seconds tops.

# show | display set | count
Count: 1057 lines

After that the RE goes off and runs the 700k routes it has already got in
inet.0 through the policy, repopulating the 2 other RIBs. It's this which 
takes the time.

The fact it takes several minutes to run is only a moderate annoyance.
Especially now others have confirmed that the MX80 really isn't that fast at 
all.

On Thursday 08 September 2011 14:19:19 Scott T. Cameron wrote:
 What you're saying isn't too clear by churn through that job.

 Do you mean when your upstream routing sessions are coming up, it takes 15
 minutes to process all the routes?
 Do you mean commit?

 Scott

 On Thu, Sep 8, 2011 at 7:41 AM, Mike Williams 
mike.willi...@comodo.com wrote:
  Hi all,
 
  Recently a discussion touched on the routing engine speed of the MX
  series, but there wasn't much like a real world comparison.
  So my question is, how slow is the RE on an MX80 compared to it's bigger
  brethren?
 
  I ask because we find the MX80 slow, really slow.
  As we've got 2 distinctly different traffic types, and 2 distinctly
  different
  upstreams (1Gbps and 10Gbps), we're using a rib group and policy to
  populate
  2 additional ribs with different local preferences applied to the learnt
  routes. Filters direct packets to the right table.
  It'll take the RE a good 10-15 minutes to churn through that job, and
  that's a
  bit annoying when you make a small change to a unrelated policy!
  Now, is that us being stupid, or the RE being slow? I know what I'd like
  to hear :)
 
 
  Cheers
 
  --
  Mike Williams


-- 
Mike Williams


[j-nsp] Making an SRX a Router and Security Device, for an enterprise

2011-07-13 Thread Mike Williams
Hi all,

Two of us here have almost lost our minds trying to do this. So we're hoping 
for a suitably strong fu-injection.

We have an MX80 and an SRX650, and shortly we'll have another MX and SRX, with 
further to come.
Elsewhere we've got a mixture of SRX, J-series (some Js all packet-mode, some 
flow-mode), and a couple Ms. Little of this is relevant though.

Obviously we got the little SRX to offer security, as only certain traffic 
has to be filtered. We also got it so all our intersite traffic can be nicely 
encrypted (IPSec).


We're pretty happy with policy filters, routing instances, interface 
rib-groups, etc, on the MX to send the right traffic to the SRX.

But one area we're seriously stuck on is the IPSec, and the asymmetry in 
intersite routing that will occur. We're anycasting services.

The available documentation, and examples floating around the interwebs, are 
aimed more at service providers doing MPLS with customers, but we're an 
enterprise with little interest in MPLS and no customer routes we need to
carry separately.

What I *think* we need is;

st0.X/Y/Z in the master.
gr-0/0/0.X/Y/Z over st0.X/Y/Z tied to a packet-mode VR. Intersite OSPF.
ge-2/0/8.X tied to the same packet-mode VR. OSPF to the MX.
ge-2/0/9.X/Y/Z tied to another VR, but flow-mode. Various interfaces and zones 
for the MX to forward to, policies and snat, an interface for the SRX to 
return SNAT'd traffic, and a couple static routes for it to return the 
traffic to the right unit on the MX.

How to go about making a "packet-mode VR" and a "flow-mode VR", and whether I
am in fact using the right terms, is where I'm mostly stuck.
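
For what it's worth, my current understanding of the packet-mode side is that
it's the selective packet services filter action on the interfaces in that VR,
something like this (the filter name and unit are mine, not from a working
config):

set firewall family inet filter bypass-flow term all then packet-mode
set firewall family inet filter bypass-flow term all then accept
set interfaces ge-2/0/8 unit 0 family inet filter input bypass-flow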

One thing I don't believe we want, and is something fairly central to one 
document (3500192-en.pdf), is the lt- interface between Services-VR
and Packet-VR.
We think that's counter-productive for us, as having flow-mode services act 
as 'an-SRX on a stick' is right, so traffic to the MX from another site is 
only seen (i.e. not encapsulated) and statefully tracked if we explicitly
choose it to be so.


Any hints on configuration, pointers to the One True Path (TM), or even 
confirmation we're already on the right path, all gladly welcomed.

Thanks

-- 
Mike Williams


[j-nsp] How does multihop eBGP work?

2011-06-24 Thread Mike Williams
Hey guys,

I've got a situation I think I need multihop bgp for (logical-systems and 
bridge-domains).
However it bugs me deeply that I don't get multihop BGP.

My biggest bugbear: if my multihop-eBGP peer tells me he knows the best way
to x.x.x.x, the packets I send towards him must be routed by intermediaries.
Will those intermediaries use their own tables and hijack my packets down their
bits of wet string, through 15 other ASs and to the moon and back?

Thanks

-- 
Mike Williams


Re: [j-nsp] How does multihop eBGP work?

2011-06-24 Thread Mike Williams
On Friday 24 June 2011 17:49:28 Patrick Okui wrote:
> BGP only populates your idea of the next hop towards your destination.
> Once your packets leave your network to the intermediary autonomous
> systems they forward the packets based on their idea of the best next hop.
>
> Short of some combination of tunnelling and/or encryption there's no real
> way for you to control/verify what happened to the packets in transit.

Thanks to all who replied.

I was sort of hoping there would be a magical auto-encapsulation feature that 
nobody ever spoke about.

We've solved our original problem in a neatly elegant way, without multi-hop 
ebgp.

-- 
Mike Williams


[j-nsp] More RAM in an SRX650

2011-04-15 Thread Mike Williams
Hi,

Has anyone ever tried increasing the RAM in an SRX650?

We've just got 3, and each has 4 slots but only a single 2GB stick.
The flowd process uses ~40% of that with no rules or traffic! I do understand
why flowd takes so much RAM straight away, but still.

I'm having trouble finding low-profile DDR2 PC2-6400 sticks though. Everyone, 
and their Gran, can do regular profile (30mm) sticks but low-profile 
(18-19mm) PC2-6400 seems rarer than hens teeth. A supplier can do us DDR2 
PC2-5300, but the lower speed concerns me.

Thanks

-- 
Mike Williams


[j-nsp] Host-to-Host IPSec, Openswan to Junos

2010-11-18 Thread Mike Williams
Hey guys,

Is anyone doing, or know how to do, IPSec tunnels between Openswan and Junos?
Openswan 2.4 on kernel 2.6 to Junos 10.2R3.10 on a J-series to be precise.

So far I've got phase 1 to complete, but phase 2 fails like this:

KMD_PM_P2_POLICY_LOOKUP_FAILURE: Policy lookup for Phase-2 [responder] failed 
for p1_local=ipv4(udp:500,[0..3]=85.234.234.118) p1_remote=ipv4(any:0,
[0..3]=81.123.123.98) p2_local=ipv4_subnet(any:0,[0..7]=85.123.123.116/30) 
p2_remote=ipv4_subnet(any:0,[0..7]=81.234.234.96/29)

Yet I have:

mi...@thejay# show security ipsec vpn mcroffce_vpn
bind-interface st0.0;
ike {
gateway mcroffice_gateway;
proxy-identity {
local 85.234.234.116/30;
remote 81.123.123.96/29;
service any;
}
ipsec-policy ipsec_pol_1;
}
establish-tunnels immediately;


Ideally I'd like the tunnel between 118/32 and 98/32 as I'll be routing stuff 
down a GRE tunnel over IPSec.

With no (left|right)subnet defined in Openswan the P2 policy wanted is;

p1_local=ipv4(udp:500,[0..3]=85.234.234.118) p1_remote=ipv4(any:0,
[0..3]=81.123.123.98) p2_local=ipv4(any:0,[0..3]=85.234.234.118) 
p2_remote=ipv4(any:0,[0..3]=81.123.123.98)

You *have* to specify address/prefix in proxy-identity though, so that 
couldn't possibly work as no CIDR mask is given in the request.


Could any one possibly enlighten me please?


Thanks

-- 
Mike Williams


Re: [j-nsp] SRX3400/3600 Stable Code Recommendations?

2010-07-24 Thread Mike Williams
On Saturday 24 July 2010 15:26:29 Scott T. Cameron wrote:
> Datacenter 2 is back running 10.0R3 with no problems.  It lacks V6, but is
> quite stable.

Hi, could you possibly expand on "lacks V6" please?
We're looking at deploying some SRX3600s, and IPv6 is something we really want 
to do.

Thanks

-- 
Mike Williams
Senior Systems Administrator



Re: [j-nsp] SRX3400/3600 Stable Code Recommendations?

2010-07-24 Thread Mike Williams
Seriously, a 3600 not on 10.2 won't do IPv6 at all?
Not even plain routing?

On Saturday 24 July 2010 22:31:13 Scott T. Cameron wrote:
 Only on the low end models,  3400 and higher has no support.

 Scott

 On Sat, Jul 24, 2010 at 5:19 PM, Mark Kamichoff p...@prolixium.com wrote:
  On Sat, Jul 24, 2010 at 10:00:36PM +0100, Mike Williams wrote:
   Hi, could you possibly expand on lacks V6 please?
 
  The one big change in 10.2 for the SRX platforms is the addition of IPv6
  flow mode.  The SRXes will still pass IPv6 traffic in earlier releases,
  but without any policy evaluation.
 
  - Mark
 
  --
  Mark Kamichoff
  p...@prolixium.com
  http://www.prolixium.com/




-- 
Mike Williams
Senior Systems Administrator