[j-nsp] Policy to Manipulate the Local Preference of VPNV4 routes

2010-05-30 Thread Sorilla, Edmar (NSN - AE/Dubai)
Hi Experts,

Please share if you know how to manipulate local preference of vpnv4
route
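
One way I was thinking of doing it (just a rough sketch on my side - the
community, names and values below are made up) is an import policy on the
iBGP sessions that carry family inet-vpn, matching the routes by their
route-target community and raising local-preference; as far as I understand,
the local-preference set on the VPNv4 route is then also carried into the
secondary copy in the VRF table:

policy-options {
    /* match routes carrying customer A's route target and prefer them */
    policy-statement VPNV4-LOCALPREF {
        term CUST-A {
            from community CUST-A-RT;
            then {
                local-preference 200;
                accept;
            }
        }
    }
    community CUST-A-RT members target:65000:100;
}
protocols {
    bgp {
        group IBGP-VPNV4 {
            type internal;
            local-address 10.255.0.1;
            family inet-vpn {
                unicast;
            }
            /* applied at import so it influences path selection */
            import VPNV4-LOCALPREF;
            neighbor 10.255.0.2;
        }
    }
}

Is this the right approach, or is there a better way?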

Thanks,
Edmar

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Logical Tunnels & IPv6

2010-05-30 Thread Mark Kamichoff
Hi - 

I just ran into what looks like an interesting limitation with logical
tunnels on JUNOS.  It seems that using logical tunnels with an
encapsulation type of ethernet results in the inability to use IPv6 on
such interfaces.

I tried the following on an MX240 running 9.5R1.8:

{master}[edit logical-systems]
l...@mx240-lab01-re0# show r1 interfaces lt-2/0/10.0  
encapsulation ethernet;
peer-unit 1;
family inet {
    address 10.0.4.5/30;
}
family inet6 {
    address fec0:0:4:4::/64 {
        eui-64;
    }
}

{master}[edit logical-systems]
l...@mx240-lab01-re0# commit check  
[edit logical-systems r1 interfaces lt-2/0/10 unit 0]
  'family'
 family INET6 not allowed with this encapsulation
error: configuration check-out failed

(yes, I know, those are deprecated site-local addresses -  this config
is straight out of the ancient JNCIE study guide)

Just for kicks, I tried switching to encapsulation vlan, added a vlan-id
to both sides, but JUNOS still complained about the inet6 family not
being supported.

Am I hitting some limitation of the built-in tunnel PIC on the
MX-series?  Or, maybe this is a code issue?  I can upgrade this box to
anything if needed, since it's just used for lab testing.

Anyone else run into this?

- Mark

-- 
Mark Kamichoff
p...@prolixium.com
http://www.prolixium.com/


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Logical Tunnels & IPv6

2010-05-30 Thread Brian Fitzgerald
Hi

We ran into this one as well - JunOS 9.3 on an MX480

We couldn't get the logical tunnel between logical systems to take an IPv6
address unless we used encapsulation frame-relay - no other encapsulation
appeared to work.  Our Juniper rep couldn't explain why; he just pointed us
at the frame-relay workaround.

Wasn't a big deal, except watch the MTU - the frame-relay default MTU was
around 4450 (I don't remember exactly), so if you are using jumbo frames on
any of the Ethernet interfaces in the logical systems, you will want to
raise the tunnel MTU accordingly to avoid fragmentation.
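
From memory the working setup looked roughly like this (unit numbers, DLCI
and addresses here are made up, and in practice each unit sat under its own
logical system; the mtu value is just an example sized for jumbo frames):

lt-0/1/0 {
    /* raised from the frame-relay default so jumbo frames fit */
    mtu 9192;
    unit 0 {
        encapsulation frame-relay;
        dlci 100;
        peer-unit 1;
        family inet6 {
            address 2001:db8:0:4::1/64;
        }
    }
    unit 1 {
        encapsulation frame-relay;
        dlci 100;
        peer-unit 0;
        family inet6 {
            address 2001:db8:0:4::2/64;
        }
    }
}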

Brian


On 10-05-30 2:21 PM, "Mark Kamichoff"  wrote:

> Hi - 
> 
> I just ran into what looks like an interesting limitation with logical
> tunnels on JUNOS.  It seems that using logical tunnels with an
> encapsulation type of ethernet results in the inability to use IPv6 on
> such interfaces.
> 
> I tried the following on an MX240 running 9.5R1.8:
> 
> {master}[edit logical-systems]
> l...@mx240-lab01-re0# show r1 interfaces lt-2/0/10.0
> encapsulation ethernet;
> peer-unit 1;
> family inet {
> address 10.0.4.5/30;
> }
> family inet6 {
> address fec0:0:4:4::/64 {
> eui-64;
> }
> }
> 
> {master}[edit logical-systems]
> l...@mx240-lab01-re0# commit check
> [edit logical-systems r1 interfaces lt-2/0/10 unit 0]
>   'family'
>  family INET6 not allowed with this encapsulation
> error: configuration check-out failed
> 
> (yes, I know, those are deprecated site-local addresses -  this config
> is straight out of the ancient JNCIE study guide)
> 
> Just for kicks, I tried switching to encapsulation vlan, added a vlan-id
> to both sides, but JUNOS still complained about the inet6 family not
> being supported.
> 
> Am I hitting some limitation of the built-in tunnel PIC on the
> MX-series?  Or, maybe this is a code issue?  I can upgrade this box to
> anything if needed, since it's just used for lab testing.
> 
> Anyone else run into this?
> 
> - Mark

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Logical Tunnels & IPv6

2010-05-30 Thread Richard A Steenbergen
On Sun, May 30, 2010 at 05:21:03PM -0400, Mark Kamichoff wrote:
> Hi - 
> 
> I just ran into what looks like an interesting limitation with logical
> tunnels on JUNOS.  It seems that using logical tunnels with an
> encapsulation type of ethernet results in the inability to use IPv6 on
> such interfaces.

It's always been like this, and Juniper has ignored all requests to add
support for IPv6 with ethernet encapsulation on the LT. The only
work-around is to use frame-relay encapsulation instead of ethernet,
which works for most but not all use cases.

The one where it really bit us was where logical-system A provides a
l2circuit to interconnect another logical-system B with other remote
devices. If you want to speak to a non-Juniper device on the other side
(or otherwise not have matching LT interfaces on both endpoints), you
can't run IPv6. In the end we just ended up having to scrap the logical 
system B on the Juniper due to lack of IPv6 support.

-- 
Richard A Steenbergen       http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Logical-Systems series M and MX

2010-05-30 Thread Gabriel Farias
Thanks, I'll go with Mohan's suggestion and establish eBGP neighbors from
LS1 and LS2 to LS3.
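
For the archives, this is roughly the shape of config I have in mind (names,
unit numbers, AS numbers and addresses are made up, and the export policy is
only referenced, not shown; LS2 would peer with LS3 the same way):

logical-systems {
    LS1 {
        interfaces {
            lt-0/1/0 {
                unit 13 {
                    encapsulation ethernet;
                    peer-unit 31;
                    family inet {
                        address 10.0.13.1/30;
                    }
                }
            }
        }
        routing-options {
            autonomous-system 65001;
        }
        protocols {
            bgp {
                group TO-LS3 {
                    type external;
                    /* SEND-TO-LS3 accepts whatever routes LS3 should learn */
                    export SEND-TO-LS3;
                    peer-as 65003;
                    neighbor 10.0.13.2;
                }
            }
        }
    }
    LS3 {
        interfaces {
            lt-0/1/0 {
                unit 31 {
                    encapsulation ethernet;
                    peer-unit 13;
                    family inet {
                        address 10.0.13.2/30;
                    }
                }
            }
        }
        routing-options {
            autonomous-system 65003;
        }
        protocols {
            bgp {
                group FROM-LS1 {
                    type external;
                    peer-as 65001;
                    neighbor 10.0.13.1;
                }
            }
        }
    }
}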

Gabriel.

2010/5/29 Alan Gravett 

> Gabriel,
>
> You cannot copy routes between Logical Systems, as there is a separate RPD
> instance
> for each LS. (unlike VRs where rib-groups etc can be used)
>
> If you have tunnel capabilities in the chassis (check for presence of lt-*
> interface) you can interconnect LS's in this way.
>
> Alan
>
>
> On Fri, May 28, 2010 at 10:24 PM, Gabriel Farias <
> gabrielfaria...@gmail.com> wrote:
>
>> Thanks this would be the only option?
>>
>> Best regard,
>> Gabriel Farias
>>
>>
>>
>> 2010/5/28 Mohan Nanduri 
>>
>> > you can configure bgp session between the logical systems, as they are
>> like
>> > a true separate router.
>> >
>> > On Fri, May 28, 2010 at 3:27 PM, Gabriel Farias <
>> gabrielfaria...@gmail.com
>> > > wrote:
>> >
>> >> Hello Gentlemen,
>> >>
>> >>
>> >> I have an M10i chassis (Junos 9.6R1.13) with three logical systems (LS1,
>> >> LS2 and LS3) configured and running. I need to copy the routing tables
>> >> of LS1 and LS2 into LS3, and I have some questions:
>> >>
>> >>
>> >>
>> >> 1)  Is this possible?
>> >>
>> >> 2)  If so, what is the best way to do it?
>> >>
>> >>
>> >> I searched J-Net and parts of the Juniper support site and found plenty
>> >> of documentation, but nothing covering this specific issue.
>> >>
>> >> Thanks,
>> >> Gabriel Farias
>> >> ___
>> >> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> >> https://puck.nether.net/mailman/listinfo/juniper-nsp
>> >>
>> >
>> >
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>>
>
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Logical Tunnels & IPv6

2010-05-30 Thread Mark Kamichoff
On Sun, May 30, 2010 at 05:59:59PM -0500, Richard A Steenbergen wrote:
> It's always been like this, and Juniper has ignored all requests to add
> support for IPv6 with ethernet encapsulation on the LT. The only
> work-around is to use frame-relay encapsulation instead of ethernet,
> which works for most but not all use cases.

Thanks guys.  I'll give the frame-relay encapsulation a try!

Perhaps we just need a few large carriers to help "nudge" Juniper on
this.  I suppose it'll be added eventually though, as more folks start
to add IPv6 to existing IPv4 configurations.

- Mark

-- 
Mark Kamichoff
p...@prolixium.com
http://www.prolixium.com/


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Logical Tunnels & IPv6

2010-05-30 Thread Chuck Anderson
On Sun, May 30, 2010 at 05:21:03PM -0400, Mark Kamichoff wrote:
> I just ran into what looks like an interesting limitation with 
> logical tunnels on JUNOS.  It seems that using logical tunnels with 
> an encapsulation type of ethernet results in the inability to use 
> IPv6 on such interfaces.

Yes, and I believe this is because logical tunnels use the same MAC 
address on each end.  Since IPv6 uses the MAC address to generate the 
link-local address by default, that may be why they prevent you from 
configuring inet6 on lt.

For another interesting case, if you create a l2circuit or l2vpn using 
logical tunnel interfaces from the same tunnel PIC on both ends of the 
l2circuit/l2vpn (say in a lab environment where all the routers are 
logical systems on one physical router), you will run into ARP issues 
because both ends use the same MAC address.  Both CE's will 
continually ARP for the other CE, but they will both ignore each 
other's ARP requests because they come from their "own" MAC.  You can 
work around this by using static ARP entries that point to the same 
MACs on each end, which "shouldn't" work but it does...IP traffic 
passes fine despite the fact that there is a duplicate MAC on the 
CE-CE subnet.  I haven't tried family inet6 here though.

l2circuit from CE:c1 on PE:r4 to CE:c2 on PE:r6.

PE:r4 to CE:c1:

l...@main# show logical-routers r4 interfaces lt-1/3/0 unit 58
description "r4:fe-0/0/0.600 to c1";
encapsulation vlan-ccc;
bandwidth 100m;
vlan-id 600;
peer-unit 59;

l...@main# show logical-routers c1 interfaces lt-1/3/0 unit 59
description "c1 to r4:fe-0/0/0.600";
encapsulation vlan;
vlan-id 600;
peer-unit 58;
family inet {
    address 192.168.16.1/24 {
        arp 192.168.16.2 mac 00:90:69:bc:2c:db;
    }
}

PE:r6 to CE:c2:

l...@main# show logical-routers r6 interfaces lt-1/3/0 unit 56 
description "r6:fe-0/1/3.600 to c2";
encapsulation vlan-ccc;
bandwidth 100m;
vlan-id 600;
peer-unit 57;

l...@main# show logical-routers c2 interfaces lt-1/3/0 unit 57
description "c2 to r6:fe-0/1/3.600";
encapsulation vlan;
vlan-id 600;
peer-unit 56;
family inet {
    address 192.168.16.2/24 {
        arp 192.168.16.1 mac 00:90:69:bc:2c:db;
    }
}
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] SSG Dialup VPN stability problems

2010-05-30 Thread Jimmy Stewpot
Hello,

I am currently investigating some ongoing stability problems with 
client-to-site VPN connections on an SSG140. Unfortunately I've been unable 
to find any detailed diagnostic steps for troubleshooting this type of 
issue. The site previously used a Cisco ASA and has since moved to Juniper; 
we are running ScreenOS 6.2.0r2, with client-to-site VPNs bound to a tunnel 
interface.

The config is as follows:
===SNIP===
set ike gateway "Remote_Dialup_VPN" dialup "Dialup_VPN_Group" Aggr outgoing-interface "ethernet0/3" preshare "" proposal "pre-g2-3des-md5" "pre-g2-3des-sha" "pre-g2-aes128-md5" "pre-g2-aes128-sha"
set ike gateway "Remote_Dialup_VPN" dpd-liveness interval 20
set ike gateway "Remote_Dialup_VPN" dpd-liveness always-send
unset ike gateway "Remote_Dialup_VPN" nat-traversal udp-checksum
set ike gateway "Remote_Dialup_VPN" nat-traversal keepalive-frequency 20
set ike gateway "Remote_Dialup_VPN" xauth server "AD_Radius" user-group "VPN.Users"
unset ike gateway "Remote_Dialup_VPN" xauth do-edipi-auth
set vpn "Remote_Dialup_VPN" gateway "Remote_Dialup_VPN" replay tunnel idletime 0 proposal "nopfs-esp-3des-sha" "nopfs-esp-3des-md5" "nopfs-esp-des-sha" "nopfs-esp-des-md5"
set vpn "Remote_Dialup_VPN" id 0x6 bind interface tunnel.3
set vpn "Remote_Dialup_VPN" dscp-mark 0
set vpn "Remote_Dialup_VPN" proxy-id local-ip 192.168.0.0/16 remote-ip 255.255.255.255/32 "ANY"
set address "VPN" "Dialup_IPPool" 10.10.40.0 255.255.255.0
set ippool "IPPool" 10.10.40.2 10.10.40.254


&&

set interface "tunnel.3" zone "VPN"
set interface tunnel.3 ip unnumbered interface ethernet0/3
set vpn "Remote_Dialup_VPN" id 0x6 bind interface tunnel.3
set vpn "Remote_VPN_to_DMZ" id 0x9 bind interface tunnel.3
set route 10.10.40.0/24 interface tunnel.3 permanent

&&


set auth-server "AD_Radius" account-type l2tp xauth
set user-group "VPN.Users" type l2tp xauth
set ike gateway "Remote_Dialup_VPN" xauth server "AD_Radius" user-group "VPN.Users"
unset ike gateway "Remote_Dialup_VPN" xauth do-edipi-auth
set xauth lifetime 30
set xauth default ippool "IPPool"
set xauth default dns1 192.168.10.1
set xauth default dns2 192.168.10.2
set xauth default wins1 192.168.10.1
set xauth default wins2 192.168.10.2
set xauth default auth server "AD_Radius"
set xauth default accounting server "AD_Radius"

===SNIP===

Now the problem we have is that some systems can't stay connected for more 
than a few seconds, while other users are stable as a rock. This is despite 
the systems having identical configurations, whether they use the Shrew 
client or the Juniper VPN client. One thing I do see is a huge number of 
replay packets detected in the error logs - does that have something to do 
with it? Has anyone experienced similar problems in the past, and what did 
you do to resolve them? I have been unable to pin down a single cause, as 
every time I connect myself I am able to stay online for days without being 
disconnected.

Any feedback would be really appreciated.

Regards,

Jimmy Stewpot.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] LDP Establishment Problem

2010-05-30 Thread Walaa Abdel razzak
Hi

I have a problem with LDP establishment between an MX240 and an MX480. The 
MPLS and LDP processes are up, family mpls is configured on the interface, 
and no firewall filter is configured. After enabling traceoptions for the 
protocol with the error flag, I got the following message:

LDP: bad PDU id (x.x.x.x:0) from y.y.y.y

where x.x.x.x is the lo0 address of my neighbor and y.y.y.y is the physical 
address of my neighbor's interface (a 10G interface).

Note: LDP establishes fine on other interfaces on both routers.

Any suggestions?
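
For what it's worth, the things I was planning to double-check next (just my
own checklist, in case anyone has better ideas):

show ldp neighbor
show ldp session detail
show route x.x.x.x

i.e. whether both ends agree on the session/transport addresses and whether
the neighbor's lo0 (the usual LDP transport address) is actually reachable
from this router. I also noticed the transport-address knob under protocols
ldp and am not sure whether it is relevant to this error:

protocols {
    ldp {
        /* untested here - makes LDP source the session from the
           interface address instead of the router-id */
        transport-address interface;
    }
}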

Best Regards,
Walaa Abdel Razzak
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Sampling filter with ifIndex

2010-05-30 Thread Ramesh Karki
Hi,

Thank you for your kind response.

The issue is with filtering sampled traffic by interface ifIndex on JunOS 9.2
running on an M10i. I have a flow collector called "RansNet MNA" running on
my VM; it has many options for filtering the traffic, but I am having a
problem with the ifIndex filter for a few interface IDs. We are unable to get
any data when filtering by interface ID - RansNet shows the error "such
record not found". SNMP/MRTG works fine on these interfaces, though.

Even with local-dump enabled on the router, flows on these interfaces are
recorded with input/output interface ID 0 instead of the correct ifIndex:

...

May 31 09:59:45 v5 flow entry
May 31 09:59:45 Src addr: 200.160.84.28
May 31 09:59:45 Dst addr: 202.51.94.5
May 31 09:59:45 Nhop addr: 202.51.66.41
*May 31 09:59:45 Input interface: 0*
*May 31 09:59:45 Output interface: 0*
May 31 09:59:45 Pkts in flow: 1
May 31 09:59:45 Bytes in flow: 1500
May 31 09:59:45 Start time of flow: 28328786
May 31 09:59:45 End time of flow: 28328786
May 31 09:59:45 Src port: 36108
May 31 09:59:45 Dst port: 44548
May 31 09:59:45 TCP flags: 0x0
May 31 09:59:45 IP proto num: 17
May 31 09:59:45 TOS: 0x0
May 31 09:59:45 Src AS: 19182
May 31 09:59:45 Dst AS: 64512
May 31 09:59:45 Src netmask len: 21
May 31 09:59:45 Dst netmask len: 24

May 31 10:00:45 v5 flow entry
May 31 10:00:45 Src addr: 208.53.158.29
May 31 10:00:45 Dst addr: 110.44.113.254
May 31 10:00:45 Nhop addr: 110.44.112.253
*May 31 10:00:45 Input interface: 0*
May 31 10:00:45 Output interface: 123
May 31 10:00:45 Pkts in flow: 3
May 31 10:00:45 Bytes in flow: 4500
May 31 10:00:45 Start time of flow: 28357260
May 31 10:00:45 End time of flow: 28393685
May 31 10:00:45 Src port: 80
May 31 10:00:45 Dst port: 31667
May 31 10:00:45 TCP flags: 0x10
May 31 10:00:45 IP proto num: 6
May 31 10:00:45 TOS: 0x0
May 31 10:00:45 Src AS: 30058
May 31 10:00:45 Dst AS: 45650
May 31 10:00:45 Src netmask len: 18
May 31 10:00:45 Dst netmask len: 23

Has anyone come across this type of issue?
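
For anyone who wants to cross-check, as far as I know the ifIndex values on
the router side can be confirmed with something like:

show interfaces ge-0/1/0 | match "SNMP ifIndex"
show snmp mib walk ifDescr

(the interface name is just an example) to make sure the interface IDs
configured in the collector's filter really match what the router reports.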


Thank You,

Ramesh
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp