Re: [j-nsp] IGMP problem

2013-09-10 Thread Vladislav Vasilev
Hi Robert,

What you have below only adds the interface to the outgoing interface list
(OIL) for that group. No IGMP joins are generated!

Regards,
Vladislav A. VASILEV


On 10 Sep 2013, at 07:51, Robert Hass wrote:

 Hi
 I would like to set up static IGMP joins between a Cisco and a Juniper,
 but it's not working - the Juniper is not sending IGMP joins.
 The same configuration between two Cisco boxes works without issues. Any clues?
 
 Interface configuration on the Cisco side, facing the Juniper:
 
 interface GigabitEthernet1/1/1
 description Juniper
 no switchport
 ip address 10.10.10.21 255.255.255.252
 ip pim passive
 !
 
 Here is the IGMP membership output - none :(
 
 cisco#sh ip igmp membership | include GigabitEthernet1/1/1
 cisco#
 
 Here is JunOS configuration:
 
 interfaces {
ge-0/0/0 {
unit 0 {
family inet {
address 10.10.10.22/30;
}
}
}
}
 routing-options {
static {
route 0.0.0.0/0 next-hop 10.10.10.21;
}
 }
 protocols {
igmp {
interface ge-0/0/0.0 {
version 2;
static {
group 231.0.0.3;
group 231.0.0.4;
}
}
}
pim {
rp {
static {
address 10.10.10.255 {
version 2;
}
}
}
interface ge-0/0/0.0 {
mode sparse;
version 2;
}
join-load-balance;
}
 }
 
 Rob

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] IGMP problem

2013-09-10 Thread Vladislav Vasilev
Robert,

Just noticed you actually have ip pim passive under the interface...

The ip igmp join-group command in Cisco IOS generates IGMP joins (and PIM joins
upstream), and packets sent to the group address get punted to the CPU (the
router will reply to icmp-echo packets sent to the group address - convenient
for troubleshooting).

The ip igmp static-group command in Cisco IOS, on the other hand, generates
IGMP joins (and PIM joins upstream), but packets sent to the group address do
not get punted to the CPU.

As Krasi said, in JunOS, you still have the PIM joins upstream, but no IGMP 
joins are generated.
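
To illustrate the difference, a minimal IOS sketch (interface and group
numbers are borrowed from this thread; treat it as an illustration, not a
verified config):

interface GigabitEthernet1/1/1
 ip igmp join-group 231.0.0.3
 ! joins as a host: traffic for this group is punted to the CPU
 ip igmp static-group 231.0.0.4
 ! forwarding only: traffic for this group is not punted to the CPU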

Regards,
Vladislav A. VASILEV


On 10 Sep 2013, at 11:24, Krasimir Avramski wrote:

 Hello,
 Actually, this config generates PIM (*,G) joins upstream towards the RP.
 I'm not aware of support for generated static IGMP joins or IGMP proxying in
 junos (excluding junosE) - though there is a feature that translates PIM to
 IGMP/MLD.
 
 Krasi
 
 


Re: [j-nsp] POLICY: from next-hop

2013-09-03 Thread Vladislav VASILEV
I guess it is logical; I was just hoping to be able to make my life easier :-).

I did my tests both with and without FIB indirect-nexthops. No change in behaviour.

Vladi

On 2 Sep 2013, at 23:47, Daniel Roesen d...@cluenet.de wrote:

 On Mon, Sep 02, 2013 at 11:26:46PM +0100, Vladislav A. VASILEV wrote:
 Figured it out. My intention was to filter routes by performing matches
 against each protocol next hop. However, this is not supported when the
 policy is applied under routing-options forwarding-table export. Has
 anyone seen this documented anywhere on juniper.net?

 Isn't this kinda logical?

 You're filtering RIB-to-FIB, so protocol next-hops aren't (to a
 first-order approximation[1]) in play anymore, just forwarding
 next-hops. So at this point, I would expect from next-hop to match
 forwarding next-hops, not protocol next-hops.

 Best regards,
 Daniel

 [1] unsure whether/how FIB indirect-nexthops might change the picture.

 --
 CLUE-RIPE -- Jabber: d...@cluenet.de -- dr@IRCnet -- PGP: 0xA85C8AA0


Re: [j-nsp] VPLS issues

2012-11-30 Thread Vladislav VASILEV
You need to use the strict keyword when installing the LSP.
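
Applied to the policy quoted below, that would look something along these
lines (MATCH_VPLS and the LSP regex are taken from the original post; this is
a sketch, not a tested config):

term VPLS {
    from community MATCH_VPLS;
    then {
        install-nexthop strict lsp-regex ".*-SILVER.*";
        load-balance per-packet;
        accept;
    }
}

As I understand it, with strict the traffic is only mapped to LSPs matching
the regex, rather than the regex merely expressing a preference among all
available LSPs.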

Sent from my iPhone

On 30 Nov 2012, at 17:29, Richard A Steenbergen r...@e-gerbil.net wrote:

 Does anybody have any experience with forced LSP path selection for VPLS
 circuits? Long story short, when we fire up traffic on one particular
 VPLS instance, we're seeing SOME of the traffic it's carrying being
 blackholed. The pattern is one of certain IP or even TCP port pairs
 being blocked, and it seems to rotate over time - which, to me, screams
 hashing across multiple LSPs where one of them is doing something bad,
 with the bad one changing as the LSPs resignal. To try and lock this
 down, I'm trying to force the VPLS traffic to route over a single LSP,
 in the usual manner with a forwarding-table export policy, and a very
 simple extended community regexp against the vrf-target community.

 term VPLS {
from community MATCH_VPLS;
then {
install-nexthop lsp-regex .*-SILVER.*;
load-balance per-packet;
accept;
}
 }

 But it sure as hell doesn't look like it's narrowing the LSP selection:

 ras@re0.router> show route forwarding-table family vpls table blah
 Routing table: blah.vpls
 VPLS:
 DestinationType RtRef Next hop   Type Index NhRef Netif
 ...
 00:xx:xx:xx:xx:xx/48 user 0  indr 1050634 5
 idxd  3223 2
   idx:1  xx.xx.142.132 Push 262153, Push 655412(top)  
 4543 1 xe-7/3/0.0
   idx:1  xx.xx.142.62  Push 262153, Push 752660, Push 
 691439(top)  1315 1 xe-4/1/0.0
   idx:2  xx.xx.142.132 Push 262153, Push 758372(top)  
 1923 1 xe-7/3/0.0
   idx:2  xx.xx.142.62  Push 262153, Push 382341, Push 
 691439(top)  2541 1 xe-4/1/0.0
   idx:3  xx.xx.142.132 Push 262153, Push 758372(top)  
 1923 1 xe-7/3/0.0
   idx:3  xx.xx.142.62  Push 262153, Push 382341, Push 
 691439(top)  2541 1 xe-4/1/0.0
   idx:4  xx.xx.142.30  Push 262153, Push 714676(top)  
 1500 1 xe-4/1/1.0
   idx:4  xx.xx.142.62  Push 262153, Push 619458, Push 
 378636(top)  3864 1 xe-4/1/0.0
   idx:xx xx.xx.142.82  Push 262153, Push 601828(top)  
  989 1 xe-5/0/0.0
   idx:xx xx.xx.142.132 Push 262153, Push 684644(top)  
 3516 1 xe-7/3/0.0
   idx:xx xx.xx.142.62  Push 262153, Push 528898, Push 
 760875(top)  4766 1 xe-4/1/0.0
   idx:xx xx.xx.142.62  Push 262153, Push 792036, Push 
 691439(top)  3473 1 xe-4/1/0.0

 Any ideas, about this or about troubleshooting the forwarding plane for
 VPLS in general? Other than that VPLS just sucks... :)

 --
 Richard A Steenbergen r...@e-gerbil.net   http://www.e-gerbil.net/ras
 GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)


[j-nsp] Multicast Issues with a Logical System

2011-01-19 Thread Vladislav Vasilev
Hello,

I am trying to figure out what is wrong with the following setup:

Streamer (239.1.2.3) --- fe-0/2/0.129 (Logical System 1) fe-0/2/0.12
--- fe-0/2/1.21 (Logical System 2) --- Receiver

I have the following configuration on LS1:

r1@M5:r1# run show configuration protocols pim
interface fe-0/2/0.129 {
mode dense;
version 2;
}

The multicast traffic is hitting the fe-0/2/0.129 interface but the
router is not populating its mroute table:

r1@jncie:r1# run show multicast route
Family: INET

Family: INET6

Even if I try to put the mcast group as a static group on either LS1
or LS2, it does not make it into the mroute table.

I am running JunOS 9.3R4.4.

Thanks!

Best Regards,
V.Vasilev


Re: [j-nsp] Hard Disk Replacement - M5

2011-01-11 Thread Vladislav Vasilev
I guess I am going for the PCMCIA method.

Thanks.

Regards,
V.Vasilev

On Tue, Jan 11, 2011 at 10:09 AM, Alex alex.arsen...@gmail.com wrote:
 I believe if you reinstall FreeBSD on this disk (minimal install will do) it
 will be recognised by request system partition.
 HTH
 Regards
 Alex

 - Original Message - From: Vladislav Vasilev
 vvasi...@vvasilev.net
 To: juniper-nsp@puck.nether.net
 Sent: Tuesday, January 11, 2011 1:13 AM
 Subject: [j-nsp] Hard Disk Replacement - M5


 Hello,

 I am trying to replace a hard drive on an old M5 running JunOS 6.2.
 The RE installs the drive which can be seen from the following output:

 r...@m5> show system boot-messages

 ad0: 91MB SanDisk SDCFB-96 [734/8/32] at ata0-master using PIO1
 ad1: 11513MB IBM-DARA-212000 [23392/16/63] at ata0-slave using UDMA33
 Mounting root from ufs:/dev/ad0s1a

 The problem is that the RE won't partition it:

 r...@m5> request system partition hard-disk
 mount: /dev/ad1s1e: Device not configured
 ERROR: Can't access hard disk, aborting partition.

 The hard drive had been erased with dd.

 Has anyone come across this problem? Am I missing something here?

 Thank you!

 Kind Regards,
 V. Vasilev


Re: [j-nsp] Hard Disk Replacement - M5

2011-01-11 Thread Vladislav Vasilev
I tried that but it did not work either.

The install-media image did the job.

Thanks!

Regards,
V.Vasilev

On Tue, Jan 11, 2011 at 12:49 PM, Miroslav Georgiev mgeorg...@spnet.net wrote:
 Start a shell and you will have dd there. After that you can do request
 system partition.
 If you have an export-cf[512|1024] version of junos you can use it too.
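
A sketch of that sequence on the router itself (ad1 is assumed from the boot
messages quoted earlier in this thread; the dd step is destructive):

root@m5> start shell
% dd if=/dev/zero of=/dev/ad1 bs=1m count=10
% exit
root@m5> request system partition hard-disk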

 On 11.01.2011 14:19, Vladislav Vasilev wrote:

 I guess I am going for the PCMCIA method.

 Thanks.

 Regards,
 V.Vasilev





 --
 Regards,
 Miroslav Georgiev
 SpectrumNet Jsc.
 +(359 2)4890604
 +(359 2)4890619




[j-nsp] Hard Disk Replacement - M5

2011-01-10 Thread Vladislav Vasilev
Hello,

I am trying to replace a hard drive on an old M5 running JunOS 6.2.
The RE installs the drive which can be seen from the following output:

r...@m5> show system boot-messages

ad0: 91MB SanDisk SDCFB-96 [734/8/32] at ata0-master using PIO1
ad1: 11513MB IBM-DARA-212000 [23392/16/63] at ata0-slave using UDMA33
Mounting root from ufs:/dev/ad0s1a

The problem is that the RE won't partition it:

r...@m5> request system partition hard-disk
mount: /dev/ad1s1e: Device not configured
ERROR: Can't access hard disk, aborting partition.

The hard drive had been erased with dd.

Has anyone come across this problem? Am I missing something here?

Thank you!

Kind Regards,
V. Vasilev


[j-nsp] link colouring on tunnel interfaces

2010-11-04 Thread Vladislav Vasilev
Is link colouring supported on lt interfaces? The admin-group
statement does not seem to be available under such interfaces.

Regards,
V. Vasilev


[j-nsp] PIM neighbors

2010-08-26 Thread Vladislav Vasilev
I have an M7i router that does not report its PIM neighbors - a Redback
SE1200 and a Cisco 7600. I can see the M7i on both the SE1200 and the
7600.

interface ae0.803 {
mode sparse;
version 2;
}

M7i
x...@xxx-xxx# run show pim neighbors
Instance: PIM.master

[edit]

Redback
[xxx]r1p1#show ip pim ne
PIM Neighbor Table
Neighbor Address  Interface  Uptime    Expire    DR Prio  GenID
10.x.x.x          UPLINK     00:08:21  00:01:23  1        59ccdf7f


Multicast traffic does flow though. Has anyone come across this? I am
running JUNOS 10.0R1.8.

Best Regards,
V. Vasilev


[j-nsp] Using fxp0 as a routed interface

2010-03-30 Thread Vladislav Vasilev
Hello!

I know I can use fxp0 as a routed interface on M7i by setting:

sysctl -w net.pfe.transit_re=1

but this does not seem to be possible on an M160. Has anyone been able
to get it working there?

P.S. It is for training purposes only, so the extra CPU load will be OK


Regards,
V.Vasilev


[j-nsp] logical systems instability

2010-02-09 Thread Vladislav Vasilev
Hello!

I might have run into a bug while trying to get some logical routers up and
running. As soon as I configure them, the router reboots itself and starts
dropping BGP sessions, dropping packets, etc. I am running JunOS 9.6R2.1 on
an M7i. Has anything like this happened to any of you?


Regards,
V.Vasilev