Re: [j-nsp] LLDP on LAG behaviour

2013-03-28 Thread Riccardo S
On the LAG I've only one link active for the time being.
The LLDP adjacency is seen only on one side; on the other, nothing.

Media converters are on both sides, but unfortunately these media converters are
9000 km away and I can't check the jumpers...

Here's the config on both sides of my EX:


 show configuration protocols lldp  
interface ae0.0;

{master:0}


In my experience media converters introduce problems like flapping, but they
don't filter particular types of traffic
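
One hedged thing still worth trying: since LLDPDUs are link-scoped, enable
LLDP on the physical member links as well as on ae0.0 on the silent side and
re-check. A minimal sketch, assuming the member ports shown elsewhere in this
thread (ge-1/0/1 and ge-0/0/1):

set protocols lldp interface ge-1/0/1.0
set protocols lldp interface ge-0/0/1.0

run show lldp neighbors
run show lldp statistics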
  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] LLDP on LAG behaviour

2013-03-27 Thread Riccardo S
mhhh... why do you say that?
Indeed there is a media converter (fiber to copper), but do you think that
could be the problem ?

 Subject: Re: [j-nsp] LLDP on LAG behaviour
 From: bd...@comlinx.com.au
 Date: Wed, 27 Mar 2013 08:54:45 +1000
 CC: steve.ros...@gmail.com; juniper-nsp@puck.nether.net
 To: dim0...@hotmail.com
 
 At the risk of asking the obvious - are the devices directly connected, or is 
 there intermediate equipment in the path (media converters, NTUs, etc.)?
 
 On 26/03/2013, at 5:30 PM, Riccardo S dim0...@hotmail.com wrote:
 
  SIDE A
  
  @xx show lldp statistics
  Interface    Parent Interface  Received  Unknown TLVs  With Errors  Discarded TLVs  Transmitted
  ge-1/0/1.0   ae0.0             36006     0             0            0               400280

  SIDE B

  @yy show lldp statistics
  Interface    Parent Interface  Received  Unknown TLVs  With Errors  Discarded TLVs  Transmitted
  ge-1/0/1.0   ae0.0             0         0             0            0               360110
  
  
  I saw it before, but it's strange since I wouldn't expect LLDP packets to be
  filtered (I've BGP peering between the two sites).

  Sometimes the line flaps and BGP goes down, but when the line (and BGP) is up
  I expect to have the LLDP neighbor up.
  
  tks
  
  
  From: steve.ros...@gmail.com
  Date: Mon, 25 Mar 2013 09:22:46 -0500
  Subject: Re: [j-nsp] LLDP on LAG behaviour
  To: dim0...@hotmail.com
  CC: juniper-nsp@puck.nether.net
  
  What does show lldp statistics show for the interfaces connected to each 
  other?
  
  On Mon, Mar 25, 2013 at 3:01 AM, Riccardo S dim0...@hotmail.com wrote:

  junos 11.4r5.7
  
  Tks
  
  From: steve.ros...@gmail.com
  Date: Sun, 24 Mar 2013 21:59:01 -0500
  
  
  Subject: Re: [j-nsp] LLDP on LAG behaviour
  To: dim0...@hotmail.com
  CC: juniper-nsp@puck.nether.net
  
  
  
  What version Junos? I ran into some LLDP bugs on early 11.4 versions.

  On Wed, Mar 13, 2013 at 5:45 AM, Riccardo S dim0...@hotmail.com wrote:

  I’ve a couple of EX in VC configuration mode (same Junos) connected with two
  links in LAG mode and LLDP switched on on the aggregate interface.

  The problem is that on one VC I see the LLDP neighborship with the other; on
  the other, nothing…

  Any idea ?

  4200A# run show lldp neighbors
  Local Interface    Parent Interface   Chassis Id          Port info    System Name
  ge-1/0/1.0         ae0.0              54:e0:31:32:bd:c0   ge-1/0/1.0   4200B

  {master:0}[edit protocols]

  4200B# run show lldp neighbors

  {master:0}[edit protocols]

  Tks
  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] LLDP on LAG behaviour

2013-03-26 Thread Riccardo S
SIDE A

@xx show lldp statistics
Interface    Parent Interface  Received  Unknown TLVs  With Errors  Discarded TLVs  Transmitted
ge-1/0/1.0   ae0.0             36006     0             0            0               400280

SIDE B

@yy show lldp statistics
Interface    Parent Interface  Received  Unknown TLVs  With Errors  Discarded TLVs  Transmitted
ge-1/0/1.0   ae0.0             0         0             0            0               360110


I saw it before, but it's strange since I wouldn't expect LLDP packets to be
filtered (I've BGP peering between the two sites).

Sometimes the line flaps and BGP goes down, but when the line (and BGP) is up
I expect to have the LLDP neighbor up.

tks


From: steve.ros...@gmail.com
Date: Mon, 25 Mar 2013 09:22:46 -0500
Subject: Re: [j-nsp] LLDP on LAG behaviour
To: dim0...@hotmail.com
CC: juniper-nsp@puck.nether.net

What does show lldp statistics show for the interfaces connected to each 
other?

On Mon, Mar 25, 2013 at 3:01 AM, Riccardo S dim0...@hotmail.com wrote:

junos 11.4r5.7

Tks

From: steve.ros...@gmail.com
Date: Sun, 24 Mar 2013 21:59:01 -0500


Subject: Re: [j-nsp] LLDP on LAG behaviour
To: dim0...@hotmail.com
CC: juniper-nsp@puck.nether.net



What version Junos? I ran into some LLDP bugs on early 11.4 versions. 

On Wed, Mar 13, 2013 at 5:45 AM, Riccardo S dim0...@hotmail.com wrote:

I’ve a couple of EX in VC configuration mode (same Junos) connected with two
links in LAG mode and LLDP switched on on the aggregate interface.

The problem is that on one VC I see the LLDP neighborship with the other; on
the other, nothing…

Any idea ?

4200A# run show lldp neighbors
Local Interface    Parent Interface   Chassis Id          Port info    System Name
ge-1/0/1.0         ae0.0              54:e0:31:32:bd:c0   ge-1/0/1.0   4200B

{master:0}[edit protocols]

4200B# run show lldp neighbors

{master:0}[edit protocols]

Tks

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] LLDP on LAG behaviour

2013-03-25 Thread Riccardo S
junos 11.4r5.7

Tks

From: steve.ros...@gmail.com
Date: Sun, 24 Mar 2013 21:59:01 -0500
Subject: Re: [j-nsp] LLDP on LAG behaviour
To: dim0...@hotmail.com
CC: juniper-nsp@puck.nether.net

What version Junos? I ran into some LLDP bugs on early 11.4 versions. 

On Wed, Mar 13, 2013 at 5:45 AM, Riccardo S dim0...@hotmail.com wrote:

I’ve a couple of EX in VC configuration mode (same Junos) connected with two
links in LAG mode and LLDP switched on on the aggregate interface.

The problem is that on one VC I see the LLDP neighborship with the other; on
the other, nothing…

Any idea ?

4200A# run show lldp neighbors
Local Interface    Parent Interface   Chassis Id          Port info    System Name
ge-1/0/1.0         ae0.0              54:e0:31:32:bd:c0   ge-1/0/1.0   4200B

{master:0}[edit protocols]

4200B# run show lldp neighbors

{master:0}[edit protocols]

Tks

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] LLDP on LAG behaviour

2013-03-13 Thread Riccardo S



I’ve a couple of EX in VC configuration mode (same Junos) connected with two
links in LAG mode and LLDP switched on on the aggregate interface.

The problem is that on one VC I see the LLDP neighborship with the other; on
the other, nothing…

Any idea ?

4200A# run show lldp neighbors
Local Interface    Parent Interface   Chassis Id          Port info    System Name
ge-1/0/1.0         ae0.0              54:e0:31:32:bd:c0   ge-1/0/1.0   4200B

{master:0}[edit protocols]

4200B# run show lldp neighbors

{master:0}[edit protocols]
Tks


  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Problem with EX4200-VC connected with LAG

2013-03-13 Thread Riccardo S

I’ve a couple of virtual chassis EX4200 (same Junos 11.4R5.7) connected with 
two links (now one is faulty) in LAG, scenario quite simple.

4200A ==  4200B

The following problems are experienced:

1) The LAG comes up only when LACP is disabled, hence a static config (see the
LACP sketch at the end of this message).
From “monitor traffic interface ge-xxx” no LACP packets are seen on either VC…

2) LLDP neighborship is visible only on 4200A and the LLDP configuration is 
identical on both sides:

4200A# run show configuration protocols lldp   
interface ae0.0;
{master:0}[edit]

4200B# run show configuration protocols lldp   
interface ae0.0;
{master:0}[edit]

4200A# run show lldp neighbors
Local Interface    Parent Interface   Chassis Id          Port info    System Name
ge-1/0/1.0         ae0.0              54:e0:32:02:bc:c0   ge-1/0/1.0   4200B
{master:0}[edit protocols]

4200B# run show lldp neighbors 
{master:0}[edit protocols]

3) OAM is also configured on the LAG, with the following difference:

4200A# run show oam ethernet link-fault-management 
  Interface: ae0.0
Status: Running, Discovery state: Fault
  Interface: ge-1/0/1.0
Status: Running, Discovery state: Active Send Local
Peer address: 00:00:00:00:00:00
Flags:0x8
  Interface: ge-0/0/1.0
Status: Running, Discovery state: Active Send Local
Peer address: 00:00:00:00:00:00
Flags:0x8
Application profile statistics:
  Profile Name   Invoked Executed
  default  00

{master:0}[edit]
4200B# run show oam ethernet link-fault-management 
  Interface: ae0.0
Status: Running, Discovery state: Send Any
  Interface: ge-1/0/1.0
Status: Running, Discovery state: Send Any
Peer address: 00:15:ad:0b:cb:96
Flags:Remote-Stable Remote-State-Valid Local-Stable 0x50
Remote entity information:
  Remote MUX action: forwarding, Remote parser action: forwarding
  Discovery mode: passive, Unidirectional mode: unsupported
  Remote loopback mode: supported, Link events: supported
  Variable requests: supported
  Interface: ge-0/0/1.0
Status: Running, Discovery state: Fault
Peer address: 00:00:00:00:00:00
Flags:0x8
Application profile statistics:
  Profile Name   Invoked Executed
  default 25   25

{master:0}[edit]

4) The BGP session between these VCs sometimes flaps, but no error is received
and no interface goes down; I only see the Invoked and Executed counters
incrementing in the OAM display of 4200B.


I have the impression that the line is not so stable, but not all of points
1-4 can be attributed only to the line….
Any idea or help is very appreciated.

Tks   
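
Regarding point 1, a minimal LACP sketch for both VCs (member port names taken
from the outputs above; whether LACPDUs survive the media converters is exactly
what needs testing, so treat this as a starting point, not a fix):

set chassis aggregated-devices ethernet device-count 1
set interfaces ge-0/0/1 ether-options 802.3ad ae0
set interfaces ge-1/0/1 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 aggregated-ether-options lacp periodic fast

run show lacp interfaces ae0
run show lacp statistics interfaces ae0

If the LACP state never leaves its initial state while "monitor traffic" shows
no LACPDUs at all, the frames are most likely being dropped in the path rather
than by the EX.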
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] EX VC mixed mode experience

2013-03-04 Thread Riccardo S

Is there a way to have more details about PSN 2013-03-868 ?

Tks

From: nitzan.tzelni...@gmail.com
Date: Sat, 2 Mar 2013 21:06:07 +0200
Subject: Re: [j-nsp] EX VC mixed mode experience
To: a...@oasis-tech.net
CC: dim0...@hotmail.com; juniper-nsp@puck.nether.net

The case with the 4550 in VC is PSN 2013-03-868
Nitzan





On Fri, Mar 1, 2013 at 7:58 PM, Amos Rosenboim a...@oasis-tech.net wrote:


We have deployed a mixed mode 4500/4200 small VC as a part of mobile network 
core and it is running smoothly so far.

We don't have significant throughput, and we don't run any fancy features.

It simply serves as an L2 port extension for MX routers.



We have also tried to deploy mixed mode between 4550 and 4200 for an ISP and 
had serious issues with arp replies not being forwarded from the 4200 to hosts 
on the 4550.



JTAC are working on this and we rolled back to using the switches as standalone 
and interconnected the switches using a LAG.





Amos



Sent from my iPhone



On 1 Mar 2013, at 17:52, Riccardo S 
dim0...@hotmail.commailto:dim0...@hotmail.com wrote:







I'm wondering if anybody has info or experience with a scenario of a mixed
virtual chassis (6 units split between 4500s and 4200s) with a high density
(more or less 70) of 10Gb ports.

Since this architecture would be placed in a very sensitive position (servers
and storage managing the online systems of an important airport), I'd like
some informal feedback on any problems or anything else that somebody has
experienced...

Why choose this design ? Only costs...

Tks



___

juniper-nsp mailing list 
juniper-nsp@puck.nether.netmailto:juniper-nsp@puck.nether.net

https://puck.nether.net/mailman/listinfo/juniper-nsp

___

juniper-nsp mailing list juniper-nsp@puck.nether.net

https://puck.nether.net/mailman/listinfo/juniper-nsp


  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] EX VC mixed mode experience

2013-03-01 Thread Riccardo S

I'm wondering if anybody has info or experience with a scenario of a mixed
virtual chassis (6 units split between 4500s and 4200s) with a high density
(more or less 70) of 10Gb ports.

Since this architecture would be placed in a very sensitive position (servers
and storage managing the online systems of an important airport), I'd like
some informal feedback on any problems or anything else that somebody has
experienced...

Why choose this design ? Only costs...

Tks
  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Juniper interop with Accedian

2013-02-26 Thread Riccardo S



Does anybody have experience with Accedian (www.accedian.com) equipment
interoperating with Juniper equipment ?

I’m really interested in experience with a LAG (with LACP or OAM on top)
configured between MX in virtual-chassis (customer side) and Accedian (metro
Ethernet provider side -- Metronode 10G).

We got the statement from our provider that everything has been tested in the
lab, but I can’t find any relevant info on the web.

Any help (also offline) with pros and cons or whatever else is very
appreciated.

Tks

  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Command to see multicast packet size

2013-02-11 Thread Riccardo S

Here it is:

 show route forwarding-table multicast destination 224.1.2.3 extensive
Routing table: default.inet [Index 0] 
Internet:

Destination:  224.1.2.3.1.2.3.4/64
  Route type: user  
  Route reference: 0   Route interface-index: 326 
  Flags: cached, check incoming interface, accounting, sent to PFE, rt nh decoupled
  Next-hop type: indirect  Index: 1048583  Reference: 3
  Nexthop:  
  Next-hop type: composite Index: 686  Reference: 1
  Next-hop type: unicast   Index: 577  Reference: 3
  Next-hop interface: gr-1/1/0.11  
  Next-hop type: unicast   Index: 655  Reference: 4
  Next-hop interface: gr-1/1/0.165 
  Next-hop type: unicast   Index: 656  Reference: 3
  Next-hop interface: gr-1/1/0.179 
  Next-hop type: unicast   Index: 657  Reference: 4
  Next-hop interface: gr-1/1/0.639 

Routing table: default-switch.inet [Index 3] 
Internet:

Routing table: __master.anon__.inet [Index 4] 
Internet:

 

 show chassis fpc pic-status 
Slot 0   Online  
  PIC 0  Online   4x 10GE XFP
Slot 1   Online  
  PIC 0  Online   10x 1GE(LAN) SFP
  PIC 1  Online   10x 1GE(LAN) SFP
 

From: amo...@juniper.net
To: dim0...@hotmail.com
CC: juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] Command to see multicast packet size
Date: Sat, 9 Feb 2013 21:56:39 +







I think there is an (unsupported) way to pull directly that info. If you send 
me the output of:



show route forwarding-table multicast destination your_group_address extensive
show chassis fpc pic-status



I can tell you the next steps.



On Feb 8, 2013, at 3:54 PM, Riccardo S wrote:



OK, this is not exactly the same as in IOS; this is an ACL with a counter,
hence not so immediate for an arbitrary mcast flow...



Any other idea ?



Tks






From: amo...@juniper.net

To: dim0...@hotmail.com; juniper-nsp@puck.nether.net

Subject: RE: [j-nsp] Command to see multicast packet size

Date: Fri, 8 Feb 2013 14:48:02 +





You probably need a filter like this:

 

configure

set firewall family inet filter myFilter term myFlow from source-address 
1.2.3.4/32

set firewall family inet filter myFilter term myFlow from destination-address 
224.1.2.3/32

set firewall family inet filter myFilter term myFlow then count myFlowCount

set firewall family inet filter myFilter term myFlow then accept

set firewall family inet filter myFilter term Rest then accept

set forwarding-options family inet filter input myFilter

commit and-quit

 

show firewall filter myFilter



 

 



From: Riccardo S [mailto:dim0...@hotmail.com] 

Sent: Friday, February 08, 2013 3:42 PM

To: Antonio Sanchez-Monge; juniper-nsp@puck.nether.net

Subject: RE: [j-nsp] Command to see multicast packet size



 


Hi

I saw it but no info is found related to the average pkt size...



 show multicast route extensive group 224.1.2.3
Group: 224.1.2.3
    Source: 1.2.3.4/32
    Upstream interface: ae0.811
    Downstream interface list:
        gr-1/1/0.1245
    Session description: Unknown
    Statistics: 0 kBps, 1 pps, 35231 packets
    Next-hop ID: 1048577
    Upstream protocol: PIM
    Route state: Active
    Forwarding state: Forwarding
    Cache lifetime/timeout: 360 seconds
    Wrong incoming interface notifications: 0
    Uptime: 09:40:25



Tks






 From: amo...@juniper.net

 To: dim0...@hotmail.com; juniper-nsp@puck.nether.net

 Subject: RE: [j-nsp] Command to see multicast packet size

 Date: Fri, 8 Feb 2013 14:35:36 +

 

 Hi,

 

 show multicast route extensive

 

 Ato

 -Original Message-

 From: juniper-nsp-boun...@puck.nether.net 
 [mailto:juniper-nsp-boun...@puck.nether.net]
 On Behalf Of Riccardo S

 Sent: Friday, February 08, 2013 3:33 PM

 To: juniper-nsp@puck.nether.net

 Subject: [j-nsp] Command to see multicast packet size

 

 

 Hi

 is there in Junos the equivalent command as in IOS:

 

 show ip mroute x.x.x.x count

 

 where you can see the packet size of the flow ?

 

 

 Example

 

 #sh ip mroute 224.1.2.3 count

 IP Multicast Statistics

 1004 routes using 445962 bytes of memory

 152 groups, 5.60 average sources per group
 Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second
 Other counts: Total/RPF failed/Other drops (OIF-null, rate-limit etc)

 

 Group: 224.1.2.3, Source count: 1, Packets forwarded: 34050, Packets 
 received: 34050

 RP-tree: Forwarding: 2/0/36/0, Other: 2/0/0

 Source: 1.2.3.4/32, Forwarding: 34048/1/53/0, Other: 34048/0/0

 

 Thanks

 

 ___

 juniper-nsp mailing list juniper-nsp@puck.nether.net 
 https://puck.nether.net/mailman/listinfo/juniper-nsp

 












  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Command to see multicast packet size

2013-02-08 Thread Riccardo S

OK, this is not exactly the same as in IOS; this is an ACL with a counter,
hence not so immediate for an arbitrary mcast flow...

Any other idea ?

Tks
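
A rough alternative without any filter, for what it's worth: "show multicast
route extensive" reports both a byte rate and a packet rate, so an approximate
average packet size can be derived from the Statistics line. For example, for
a hypothetical flow showing:

Statistics: 3 kBps, 30 pps, 43340 packets

average packet size ≈ (3 * 1000 bytes/s) / (30 packets/s) = 100 bytes

For very low-rate flows the kBps field rounds down to zero and the estimate
breaks down, which is exactly the case of the "0 kBps, 1 pps" output shown
earlier in this thread.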

From: amo...@juniper.net
To: dim0...@hotmail.com; juniper-nsp@puck.nether.net
Subject: RE: [j-nsp] Command to see multicast packet size
Date: Fri, 8 Feb 2013 14:48:02 +









You probably need a filter like this:
 
configure
set firewall family inet filter myFilter term myFlow from source-address 
1.2.3.4/32

set firewall family inet filter myFilter term myFlow from destination-address 
224.1.2.3/32

set firewall family inet filter myFilter term myFlow then count myFlowCount
set firewall family inet filter myFilter term myFlow then accept
set firewall family inet filter myFilter term Rest then accept
set forwarding-options family inet filter input myFilter
commit and-quit
 
show firewall filter myFilter

 
 


From: Riccardo S [mailto:dim0...@hotmail.com]


Sent: Friday, February 08, 2013 3:42 PM

To: Antonio Sanchez-Monge; juniper-nsp@puck.nether.net

Subject: RE: [j-nsp] Command to see multicast packet size


 

Hi

I saw it but no info is found related to the average pkt size...



 show multicast route extensive group 224.1.2.3
Group: 224.1.2.3
    Source: 1.2.3.4/32
    Upstream interface: ae0.811
    Downstream interface list:
        gr-1/1/0.1245
    Session description: Unknown
    Statistics: 0 kBps, 1 pps, 35231 packets
    Next-hop ID: 1048577
    Upstream protocol: PIM
    Route state: Active
    Forwarding state: Forwarding
    Cache lifetime/timeout: 360 seconds
    Wrong incoming interface notifications: 0
    Uptime: 09:40:25



Tks





 From:
amo...@juniper.net

 To: dim0...@hotmail.com; 
juniper-nsp@puck.nether.net

 Subject: RE: [j-nsp] Command to see multicast packet size

 Date: Fri, 8 Feb 2013 14:35:36 +

 

 Hi,

 

 show multicast route extensive

 

 Ato

 -Original Message-

 From: juniper-nsp-boun...@puck.nether.net 
 [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of Riccardo S

 Sent: Friday, February 08, 2013 3:33 PM

 To: juniper-nsp@puck.nether.net

 Subject: [j-nsp] Command to see multicast packet size

 

 

 Hi

 is there in Junos the equivalent command as in IOS:

 

 show ip mroute x.x.x.x count

 

 where you can see the packet size of the flow ?

 

 

 Example

 

 #sh ip mroute 224.1.2.3 count

 IP Multicast Statistics

 1004 routes using 445962 bytes of memory

 152 groups, 5.60 average sources per group
 Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second
 Other counts: Total/RPF failed/Other drops (OIF-null, rate-limit etc)

 

 Group: 224.1.2.3, Source count: 1, Packets forwarded: 34050, Packets 
 received: 34050

 RP-tree: Forwarding: 2/0/36/0, Other: 2/0/0

 Source: 1.2.3.4/32, Forwarding: 34048/1/53/0, Other: 34048/0/0

 

 Thanks

 

 ___

 juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

 

 


  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] OAM over LAG

2013-02-01 Thread Riccardo S




Has anybody implemented OAM over an Ethernet LAG configured on an EX4200
virtual chassis ?

I saw this link that seems related to MX and not EX



http://www.juniper.net/techpubs/en_US/junos11.1/topics/example/layer-2-802-1ah-ethernet-oam-lfm-example-for-aggregated-ethernet-mx-solutions.html


Any examples ?
How does it work if one of the links composing the LAG has problems (like CRC
errors, flapping, etc.) ?

Do I need only LFM to declare the link down when it has errors ?

Tks
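
In case it helps, a hedged sketch of the LFM piece only (the action-profile
name lfm-down is a hypothetical example; per-link error thresholds and timers
would still need tuning):

set protocols oam ethernet link-fault-management action-profile lfm-down event link-adjacency-loss
set protocols oam ethernet link-fault-management action-profile lfm-down action link-down
set protocols oam ethernet link-fault-management interface ge-1/0/1.0 apply-action-profile lfm-down
set protocols oam ethernet link-fault-management interface ge-1/0/1.0 pdu-interval 1000

With LFM per member link and an action of link-down, a member that loses its
OAM adjacency should be taken out of service so the LAG hashes around it.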



  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Multicast IGMP filter

2013-01-20 Thread Riccardo S

I'm not able to force my customer...

I've tried the firewall filter and it seems to work; however, it's not the
same as doing it via IGMP.

Tks   
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Multicast IGMP filter

2013-01-17 Thread Riccardo S



Hi

I’ve an IGMP filter applied to some interfaces, done in this way:

set protocols igmp interface gr-0/0/0.11 group-policy IGMP-test-B

This filter is needed to reject the join reports from the remote CPE for any
group not explicitly permitted. Here is the policy:

set policy-options policy-statement IGMP-test-B term PAYTV-FEED-B from route-filter 239.239.239.239/32 exact
set policy-options policy-statement IGMP-test-B term PAYTV-FEED-B from route-filter 239.239.233.233/32 exact
set policy-options policy-statement IGMP-test-B term PAYTV-FEED-B then accept
set policy-options policy-statement IGMP-test-B term LAST then reject

My question is: if my customer uses IGMPv3, can I also filter the source of
the group to be permitted ?

I’ve done it this way:

set policy-options policy-statement IGMP-test-B term PAYTV-FEED-B from route-filter 239.239.239.239/32 exact
set policy-options policy-statement IGMP-test-B term PAYTV-FEED-B from route-filter 239.239.233.233/32 exact
set policy-options policy-statement IGMP-test-B term PAYTV-FEED-B from source-address-filter 10.1.1.1/32 exact
set policy-options policy-statement IGMP-test-B term PAYTV-FEED-B then accept
set policy-options policy-statement IGMP-test-B term LAST then reject

But it is not working, since my customer uses IGMPv2 (I guess); I always get
the flow also from other sources...

Is there a way to filter the source with IGMPv2 ?

Or is there another way to prevent my customer from getting the flow from a
source that is not permitted ?

Any advice ?

Tks
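
A hedged fallback while the receiver only speaks IGMPv2: enforce the (S,G)
check in the forwarding plane instead of in IGMP, with an output filter on
the customer-facing unit that only lets the permitted source reach the
permitted groups. A sketch with hypothetical filter and term names, reusing
the addresses above:

set firewall family inet filter MCAST-SRC-B term FEED-B from source-address 10.1.1.1/32
set firewall family inet filter MCAST-SRC-B term FEED-B from destination-address 239.239.239.239/32
set firewall family inet filter MCAST-SRC-B term FEED-B from destination-address 239.239.233.233/32
set firewall family inet filter MCAST-SRC-B term FEED-B then accept
set firewall family inet filter MCAST-SRC-B term NO-OTHER-MCAST from destination-address 224.0.0.0/4
set firewall family inet filter MCAST-SRC-B term NO-OTHER-MCAST then discard
set firewall family inet filter MCAST-SRC-B term REST then accept
set interfaces gr-0/0/0 unit 11 family inet filter output MCAST-SRC-B

Note this drops the unwanted stream only at the customer port, after the
(*,G) join has already pulled it across the network.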

  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Multicast flow with Wrong incoming interface notifications counter incrementing

2013-01-16 Thread Riccardo S

Just a follow-up.
It seems that, besides the fact that RPF is not failing, sometimes the mcast
flow arrives via another vlan and we get a kernel mismatch.

Thanks to David for the help.

Tks   
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Multicast flow with Wrong incoming interface notifications counter incrementing

2013-01-15 Thread Riccardo S

As soon as I experience the issue again, I'll do that...

Tks
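
For reference when it recurs, a hedged capture list along the lines of what
David asked for (all read-only show commands):

run show multicast route group 224.12.22.1 extensive
run show multicast statistics          (twice, about 10 seconds apart)
run show multicast rpf 1.1.1.1
run show pim join extensive

"show multicast rpf" shows the upstream interface the RE expects for the
source, which can then be compared with the vlan the flow actually arrives on.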

 From: david@orange.com
 To: dim0...@hotmail.com; nkham...@juniper.net
 CC: juniper-nsp@puck.nether.net
 Date: Tue, 15 Jan 2013 09:19:47 +0100
 Subject: RE: [j-nsp] Multicast flow with Wrong incoming interface 
 notifications counter incrementing
 
 Could you send us the output of show multicast statistics, taken twice,
 separated by 10 sec.
 
 Thank you 
  
 
 
  
 David Roy 
 IP/MPLS Support engineer - Orange France
 Ph. +33 2 99 87 64 72 - Mob. +33 6 85 52 22 13
 david@orange.com
  
 JNCIE-MT/SP #703 - JNCIE-ENT #305 - JNCIP-SEC
 
 -Message d'origine-
 De : juniper-nsp-boun...@puck.nether.net 
 [mailto:juniper-nsp-boun...@puck.nether.net] De la part de Riccardo S
 Envoyé : mardi 15 janvier 2013 09:14
 À : nkham...@juniper.net
 Cc : juniper-nsp@puck.nether.net
 Objet : Re: [j-nsp] Multicast flow with Wrong incoming interface 
 notifications counter incrementing
 
 
 Having also a look to Juniper documentation says:
 
 Wrong incoming interface notifications = Number of times that the upstream 
 interface was not available
 
 It doesn't sound like an RPF fail counter, does it ?
 
 Tks
 
  From: nkham...@juniper.net
  To: dim0...@hotmail.com
  CC: juniper-nsp@puck.nether.net
  Subject: Re: [j-nsp] Multicast flow with Wrong incoming interface 
  notifications counter incrementing
  Date: Mon, 14 Jan 2013 17:14:42 +
  
  Iif mismatch means the multicast traffic received for this (S,G) has a 
  different incoming interface than the one we programmed in the PFE (based 
  on RPF check). 
  
  Traffic would get discarded in this case and not forwarded. However, a
  notification is sent to RPD/PIM on the RE to check whether we need to
  change the iif on this (S,G) to a new iif (e.g. if this leads to an
  RPT-to-SPT cutover, or if there is a change in the iif due to an RPF
  interface change).
  
  If the counter increases consistently, then most likely you are not
  forwarding on that (S,G), i.e. 100% of the incoming traffic on this (S,G)
  gets discarded with an iif mismatch. Alternately, you might be getting the
  same (S,G) traffic on 2 different iifs; in this case the one coming over
  the correct iif gets forwarded and the one over the wrong iif gets
  discarded, with the iif mismatch counted against that (S,G).
  
  Thanks,
  Nilesh. 
  
  
  Sent from my mobile device
  
  On Jan 14, 2013, at 8:14 AM, Riccardo S dim0...@hotmail.com wrote:
  
   
   
   
   Hi
   
   What does the incrementing counter Wrong incoming interface notifications
   mean in the following command ?
   
   Is the following flow forwarded, as expected, towards the receiver ?
   
   
   
   
   
   # run show multicast route group 224.12.22.1 extensive
   Instance: master Family: INET

   Group: 224.12.22.1
       Source: 1.1.1.1/32
       Upstream interface: vlan.1
       Downstream interface list:
           vlan.100
       Session description: Unknown
       Statistics: 3 kBps, 30 pps, 43340 packets
       Next-hop ID: 131071
       Upstream protocol: PIM
       Route state: Active
       Forwarding state: Forwarding
       Cache lifetime/timeout: 360 seconds
       Wrong incoming interface notifications: 6960
       Uptime: 00:36:36

   After 10 seconds:

   # run show multicast route group 224.12.22.1 extensive
   Instance: master Family: INET

   Group: 224.12.22.1
       Source: 1.1.1.1/32
       Upstream interface: vlan.1
       Downstream interface list:
           vlan.100
       Session description: Unknown
       Statistics: 2 kBps, 30 pps, 43491 packets
       Next-hop ID: 131071
       Upstream protocol: PIM
       Route state: Active
       Forwarding state: Forwarding
       Cache lifetime/timeout: 360 seconds
       Wrong incoming interface notifications: 6974
       Uptime: 00:36:41
   Tks
   
   
   
   ___
   juniper-nsp mailing list juniper-nsp@puck.nether.net 
   https://puck.nether.net/mailman/listinfo/juniper-nsp
   
  
 
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net 
 https://puck.nether.net/mailman/listinfo/juniper-nsp
 

[j-nsp] Multicast flow with Wrong incoming interface notifications counter incrementing

2013-01-14 Thread Riccardo S



Hi

What does the incrementing counter “Wrong incoming interface notifications”
mean in the following command ?

Is the following flow forwarded, as expected, towards the receiver ?

# run show multicast route group 224.12.22.1 extensive
Instance: master Family: INET

Group: 224.12.22.1
    Source: 1.1.1.1/32
    Upstream interface: vlan.1
    Downstream interface list:
        vlan.100
    Session description: Unknown
    Statistics: 3 kBps, 30 pps, 43340 packets
    Next-hop ID: 131071
    Upstream protocol: PIM
    Route state: Active
    Forwarding state: Forwarding
    Cache lifetime/timeout: 360 seconds
    Wrong incoming interface notifications: 6960
    Uptime: 00:36:36

After 10 seconds:

# run show multicast route group 224.12.22.1 extensive
Instance: master Family: INET

Group: 224.12.22.1
    Source: 1.1.1.1/32
    Upstream interface: vlan.1
    Downstream interface list:
        vlan.100
    Session description: Unknown
    Statistics: 2 kBps, 30 pps, 43491 packets
    Next-hop ID: 131071
    Upstream protocol: PIM
    Route state: Active
    Forwarding state: Forwarding
    Cache lifetime/timeout: 360 seconds
    Wrong incoming interface notifications: 6974
    Uptime: 00:36:41

Tks


  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Juniper equivalent to Cisco 3800X

2013-01-02 Thread Riccardo S





Which is, from your point of view, the equivalent model to the Cisco 3800X
metro Ethernet switch ?

I’m focusing on the following features needed:

- At least 24 Gigabit Ethernet ports
- Full BGP support
- Full multicast support
- Ethernet aggregation (LAG)
- Full layer 2 capabilities

For the decision a cost analysis also makes sense; as far as I can understand,
the Cisco 3800X is more than $20k list price…

Tks

  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Juniper equivalent to Cisco 3800X

2013-01-02 Thread Riccardo S

Hi Tim
indeed it was Cisco directly telling me to use the C3800X, but I guess I know why

I was thinking to EX4200 or EX4550... 
MPLS is not needed.

Cisco 3400E has only 10/100 24 ports and 2 combo as far as I know...

Ric

Date: Wed, 2 Jan 2013 07:56:06 -0600
Subject: Re: [j-nsp] Juniper equivalent to Cisco 3800X
From: jackson@gmail.com
To: dim0...@hotmail.com
CC: juniper-nsp@puck.nether.net

Any of the EX3200/3300/4200 meet those requirements, but do not have the full 
MPLS suite that the 3800X will have, nor do they have the buffers that the 
3800X does. 
If that's all you want from a switch, the 3800X is overkill; maybe look at the
ME3400E vs the EX4200/3300.

--Tim

On Wed, Jan 2, 2013 at 7:48 AM, Riccardo S dim0...@hotmail.com wrote:

Which is, from your point of view, the equivalent model to the Cisco 3800X
metro Ethernet switch ?

I’m focusing on the following features needed:

- At least 24 Gigabit Ethernet ports
- Full BGP support
- Full multicast support
- Ethernet aggregation (LAG)
- Full layer 2 capabilities

For the decision a cost analysis also makes sense; as far as I can understand,
the Cisco 3800X is more than $20k list price…

Tks





___

juniper-nsp mailing list juniper-nsp@puck.nether.net

https://puck.nether.net/mailman/listinfo/juniper-nsp


  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Juniper equivalent to Cisco 3800X

2013-01-02 Thread Riccardo S




Hi Eric 
I guess 3750X could be an option.

By the way, the intended use of this machine is as an Ethernet customer
aggregator (hence many eth ports) with the need for BGP/PIM sessions for mcast
redistribution from the core (hence BGP/PIM needed, but no MPLS).

I know the EX4200 can support up to 128 BGP sessions; the routing table
dimension doesn't matter since it is less than 1k routes.

Do you know how many BGP peers are supported by 3750X ?

Tks


  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Junos device monitoring

2012-12-27 Thread Riccardo S

hi
back to a previous thread: if I'd like to monitor multicast flows with
per-group detail (quantity of data sent or received) on MX, EX and SSG, what
do you suggest ?

SNMP polling ?
sFlow ? jflow ?
other ?

Pls share some config example if possible...

Tks   
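
In case it helps as a starting point, a hedged sketch of both options (the
collector address 192.0.2.1 and the port/interface numbers are placeholders,
and exact syntax varies by release). Version 5 jflow sampling on MX:

set interfaces ge-1/0/1 unit 0 family inet sampling input
set forwarding-options sampling input rate 100
set forwarding-options sampling family inet output flow-server 192.0.2.1 port 2055 version 5

and sFlow on EX (which has no jflow):

set protocols sflow collector 192.0.2.1
set protocols sflow interfaces ge-0/0/1.0
set protocols sflow sample-rate ingress 1000

Neither gives per-group byte counters directly; the collector has to
aggregate the exported flow records per group address.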
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Junos device monitoring

2012-12-14 Thread Riccardo S

hi ytti
any config example of 2 and 3 ?

Tks   
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] R: Routing Instance BGP Full Routing High Memory persists

2012-12-01 Thread Riccardo S
With the m7 the only useful action to solve the issue was a reboot...

sent with Android

Giuliano Medalha giuli...@wztech.com.br ha scritto:

People,

We are doing some BGP tests using routing-instances on MX5-T-DC routers.

We have created a routing-instance to receive 3 full routing inet.0 tables.

The master routing has 3 majors full routing tables too.

Before the tests we check the RE percentage of using memory ... using:

Router show chassis routing engine

It shows 47% usage.

After the creation of the routing-instance (type virtual-router) - named
TEST  we could receive more 3 full routing tables on the TEST.inet.0 table

Using the same command is show to us:

Router show chassis routing engine

87% memory utilization

After we remove the routing-instance configuration with a commit command in
sequence ... it still shows the same 87% of memory utilization

Is this procedure correct ? Is there any command to clear the memory
allocation without disrupting forwarding services ?

Can you please give me some feedback about it ?

Did anyone see it before ?

Thanks a lot,

Giuliano
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] LAG on Ex4200 fiber + copper

2012-11-29 Thread Riccardo S

Mates
any other experience regarding the initial question ?

Tks

Date: Wed, 28 Nov 2012 11:00:59 +0100
Subject: Re: [j-nsp] LAG on Ex4200 fiber + copper
From: dariu...@gmail.com
To: dim0...@hotmail.com
CC: juniper-nsp@puck.nether.net

As far as I know, and according to all the Juniper docs, you can only use
optical PIC ports (and of course the dedicated VC ports) for this,
eg.
https://www.juniper.net/techpubs//en_US/junos/topics/example/virtual-chassis-ex4200-link-aggregation-over-extended-vcps.html

Regards,Darius

On Wed, Nov 28, 2012 at 10:28 AM, Riccardo S dim0...@hotmail.com wrote:

On an EX4200-24T, is it possible to create a LAG using a 1 Gb uplink fiber
port + a 1 Gb copper port ?

Any juniper.net reference about that ?

Tks





___

juniper-nsp mailing list juniper-nsp@puck.nether.net

https://puck.nether.net/mailman/listinfo/juniper-nsp



  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] LAG on Ex4200 fiber + copper

2012-11-29 Thread Riccardo S

Tks
I'd like to buy them once I know it works... ;-)

@Darius, if you are able to try pls share the results... ;-)

tks

Date: Thu, 29 Nov 2012 08:27:50 -0500
Subject: Re: [j-nsp] LAG on Ex4200 fiber + copper
From: ja...@freedomnet.co.nz
To: dim0...@hotmail.com
CC: juniper-nsp@puck.nether.net

In theory it should be possible. The best thing to do is configure it, do a
commit check, and see what happens.
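
For what it's worth, a minimal sketch of what to commit-check (port numbers
are examples only: ge-0/0/0 as a built-in copper port, ge-0/1/0 as a 1G SFP
uplink port; both members of a LAG have to run at the same speed and duplex):

set chassis aggregated-devices ethernet device-count 1
set interfaces ge-0/0/0 ether-options 802.3ad ae0
set interfaces ge-0/1/0 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options link-speed 1g
set interfaces ae0 unit 0 family ethernet-switching

If "commit check" passes, "show interfaces ae0 extensive" and "show lacp
interfaces" should confirm whether both members actually attach.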


On Thu, Nov 29, 2012 at 3:07 AM, Riccardo S dim0...@hotmail.com wrote:



Mates

any other experience regarding the initial question ?

Tks



Date: Wed, 28 Nov 2012 11:00:59 +0100

Subject: Re: [j-nsp] LAG on Ex4200 fiber + copper

From: dariu...@gmail.com

To: dim0...@hotmail.com

CC: juniper-nsp@puck.nether.net



As far as I know, and according to all the Juniper docs, you can only use
optical PIC ports (and of course the dedicated VC ports) for this,

eg.

https://www.juniper.net/techpubs//en_US/junos/topics/example/virtual-chassis-ex4200-link-aggregation-over-extended-vcps.html




Regards,Darius



On Wed, Nov 28, 2012 at 10:28 AM, Riccardo S dim0...@hotmail.com wrote:

On an EX4200-24T, is it possible to create a LAG using a 1 Gb uplink fiber
port + a 1 Gb copper port ?

Any juniper.net reference about that ?

Tks











___



juniper-nsp mailing list juniper-nsp@puck.nether.net



https://puck.nether.net/mailman/listinfo/juniper-nsp









___

juniper-nsp mailing list juniper-nsp@puck.nether.net

https://puck.nether.net/mailman/listinfo/juniper-nsp


  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] R: Re: R: Re: Same multicast group from the source

2012-11-26 Thread Riccardo S

Just to let you know the following interesting feature from Cisco

http://www.cisco.com/en/US/docs/ios/ipmulti/configuration/guide/imc_serv_reflect.pdf

it seems to allow NATting mcast-to-mcast, mcast-to-ucast and vice versa...

any experience with this ?

Tks

 CC: dim0...@hotmail.com; juniper-nsp@puck.nether.net
 From: st...@acm.org
 Subject: Re: R: Re: R: Re: [j-nsp] Same multicast group from the source
 Date: Thu, 22 Nov 2012 20:13:27 -0700
 To: ch...@chrishellberg.com
 
 Since it is IGMP v2, the receiver will send a (*,G) membership report. This 
 will cause traffic for both S1
 and S2 to be received and the receiver will have to ignore the unwanted 
 channel.
 
 My understanding of the OPs request was to be able to have the receiver 
 receive S1, and not receive S2. That can be accomplished by using the 
 ssm-map-policy to convert the (*,224.1.1.1) membership report to a 
 (1.1.1.1,224.1.1.1) PIM join.
 
 --Stacy
 
 
 On Nov 22, 2012, at 3:15 PM, ch...@chrishellberg.com wrote:
 
  Almost right. The bit that ssm mapping does is take an igmpv2 group report
  and map it into a PIM-SSM join towards the source(s). Thus no need for an
  RP, as if the request were igmpv3
  
  Been years since I've looked at this but from memory you don't even need to 
  do ssm mapping since the rp should join both sources and you'll get a 
  multiplexed stream, which is what you want. If I understand correctly.
  
  /chris
  ---
  
  -Original Message-
  From: Riccardo S dim0...@hotmail.com
  Date: Thu, 22 Nov 2012 21:47:24 
  To: ch...@chrishellberg.com; Stacy W. Smithst...@acm.org
  Cc: juniper-nsp@puck.nether.net
  Subject: R: Re: R: Re: [j-nsp] Same multicast group from the source
  
  As far as I know SSM works with IGMPv3;
  with IGMPv2 you need to go through the RP...
  
  sent with Android
  
  ch...@chrishellberg.com ha scritto:
  
  Yes it does
  ---
  
  -Original Message-
  From: Riccardo S dim0...@hotmail.com
  Date: Thu, 22 Nov 2012 20:56:43 
  To: Stacy W. Smithst...@acm.org
  Cc: ch...@chrishellberg.com; juniper-nsp@puck.nether.net
  Subject: R: Re: [j-nsp] Same multicast group from the source
  
  I cannot force my customer to use igmpv3
  Do not think it works with igmpv2
  
  sent with Android
  
  Stacy W. Smith st...@acm.org ha scritto:
  
  You might want to look at ssm-map-policy.
  
  http://www.juniper.net/techpubs/en_US/junos12.2/topics/topic-map/multicast-ssm-map-for-different-groups-to-different-sources.html
  
  I think it meets your needs, but depending on how dynamic your customer's 
  requirements are, it may be cumbersome to maintain.
  
  Here's an example:
  
  [edit]
  user@host# show protocols igmp 
  interface ge-0/0/0.0 {
   ssm-map-policy S1-only;
  }
  interface ge-0/0/1.0 {
   ssm-map-policy S1-and-S2;
  }
  interface ge-0/0/2.0 {
   ssm-map-policy S2-only;
  }
  
  [edit]
  user@host# show policy-options 
  policy-statement S1-and-S2 {
   term 1 {
   from {
   route-filter 224.1.1.1/32 exact;
   }
   then {
   ssm-source [ 1.1.1.1 2.2.2.2 ];
   accept;
   }
   }
  }
  policy-statement S1-only {
   term 1 {
   from {
   route-filter 224.1.1.1/32 exact;
   }
   then {
   ssm-source 1.1.1.1;
   accept;
   }
   }
  }
  policy-statement S2-only {
   from {  
   route-filter 224.1.1.1/32 exact;
   }
   then {
   ssm-source 2.2.2.2;
   accept;
   }
  }
  
  
  
  On Nov 22, 2012, at 9:54 AM, Riccardo S dim0...@hotmail.com wrote:
  
  no failover, are completely different data...
  
  Let's imagine we're talking of TV broadcasting and my customer wants 
  receive today only channel S1 but tomorrow also, maybe, channel S2...
  
  Ric
  
  Date: Thu, 22 Nov 2012 17:51:07 +0100
  From: ch...@chrishellberg.com
  To: dim0...@hotmail.com
  CC: juniper-nsp@puck.nether.net
  Subject: Re: [j-nsp] Same multicast group from the source
  
  It depends on what you want to do. Should the receiver receive both 
  sources simultaneously and demultiplex the two? Or do you need failover 
  of some kind?
  
  /Chris
  
  On 22/11/12 5:40 PM, Riccardo S wrote:
  
  
  
  Hi expert team,
  
  I’ve a generic multicast question.
  
  
  
  Let’s say I’ve two different source
  sending different multicast traffic over the same multicast group, how 
  can I
  solve the problem for a customer receiver ?
  
  
  
  S1=1.1.1.1 G1=224.1.1.1
  
  S2=2.2.2.2 G2=224.1.1.1
  
  
  
  Customer receiver is usually running
  IGMPv2 and I cannot force to use IGMPv3.
  
  
  
  Network in the middle between source
  and receiver is not MPLS.
  
  
  
  I’ve thought to NAT multicast group,
  but I’m not sure it works and if anyone has already implemented it…
  
  
  
  Any different solution is
  appreciated…
  
  Tks
  
  Ric
  
  ___
  juniper-nsp mailing list juniper-nsp@puck.nether.net
  https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] R: Re: R: Re: Same multicast group from the source

2012-11-23 Thread Riccardo S

Thanks both,
it's more clear now

Ric

 CC: dim0...@hotmail.com; juniper-nsp@puck.nether.net
 From: st...@acm.org
 Subject: Re: R: Re: R: Re: [j-nsp] Same multicast group from the source
 Date: Thu, 22 Nov 2012 20:13:27 -0700
 To: ch...@chrishellberg.com
 
 Since it is IGMP v2, the receiver will send a (*,G) membership report. This 
 will cause traffic for both S1
 and S2 to be received and the receiver will have to ignore the unwanted 
 channel.
 
 My understanding of the OPs request was to be able to have the receiver 
 receive S1, and not receive S2. That can be accomplished by using the 
 ssm-map-policy to convert the (*,224.1.1.1) membership report to a 
 (1.1.1.1,224.1.1.1) PIM join.
 
 --Stacy
 
 
 On Nov 22, 2012, at 3:15 PM, ch...@chrishellberg.com wrote:
 
  Almost right. The bit that ssm mapping does is take an igmpv2 group report
  and map it into a PIM-SSM join towards the source(s). Thus no need for an
  RP, as if the request were igmpv3
  
  Been years since I've looked at this but from memory you don't even need to 
  do ssm mapping since the rp should join both sources and you'll get a 
  multiplexed stream, which is what you want. If I understand correctly.
  
  /chris
  ---
  
  -Original Message-
  From: Riccardo S dim0...@hotmail.com
  Date: Thu, 22 Nov 2012 21:47:24 
  To: ch...@chrishellberg.com; Stacy W. Smithst...@acm.org
  Cc: juniper-nsp@puck.nether.net
  Subject: R: Re: R: Re: [j-nsp] Same multicast group from the source
  
  As far as I know SSM works with IGMPv3;
  with IGMPv2 you need to go through the RP...
  
  sent with Android
  
  ch...@chrishellberg.com ha scritto:
  
  Yes it does
  ---
  
  -Original Message-
  From: Riccardo S dim0...@hotmail.com
  Date: Thu, 22 Nov 2012 20:56:43 
  To: Stacy W. Smithst...@acm.org
  Cc: ch...@chrishellberg.com; juniper-nsp@puck.nether.net
  Subject: R: Re: [j-nsp] Same multicast group from the source
  
  I cannot force my customer to use igmpv3
  Do not think it works with igmpv2
  
  sent with Android
  
  Stacy W. Smith st...@acm.org ha scritto:
  
  You might want to look at ssm-map-policy.
  
  http://www.juniper.net/techpubs/en_US/junos12.2/topics/topic-map/multicast-ssm-map-for-different-groups-to-different-sources.html
  
  I think it meets your needs, but depending on how dynamic your customer's 
  requirements are, it may be cumbersome to maintain.
  
  Here's an example:
  
  [edit]
  user@host# show protocols igmp 
  interface ge-0/0/0.0 {
   ssm-map-policy S1-only;
  }
  interface ge-0/0/1.0 {
   ssm-map-policy S1-and-S2;
  }
  interface ge-0/0/2.0 {
   ssm-map-policy S2-only;
  }
  
  [edit]
  user@host# show policy-options 
  policy-statement S1-and-S2 {
   term 1 {
   from {
   route-filter 224.1.1.1/32 exact;
   }
   then {
   ssm-source [ 1.1.1.1 2.2.2.2 ];
   accept;
   }
   }
  }
  policy-statement S1-only {
   term 1 {
   from {
   route-filter 224.1.1.1/32 exact;
   }
   then {
   ssm-source 1.1.1.1;
   accept;
   }
   }
  }
  policy-statement S2-only {
   from {  
   route-filter 224.1.1.1/32 exact;
   }
   then {
   ssm-source 2.2.2.2;
   accept;
   }
  }
  
  
  
  On Nov 22, 2012, at 9:54 AM, Riccardo S dim0...@hotmail.com wrote:
  
  no failover, are completely different data...
  
  Let's imagine we're talking of TV broadcasting and my customer wants 
  receive today only channel S1 but tomorrow also, maybe, channel S2...
  
  Ric
  
  Date: Thu, 22 Nov 2012 17:51:07 +0100
  From: ch...@chrishellberg.com
  To: dim0...@hotmail.com
  CC: juniper-nsp@puck.nether.net
  Subject: Re: [j-nsp] Same multicast group from the source
  
  It depends on what you want to do. Should the receiver receive both 
  sources simultaneously and demultiplex the two? Or do you need failover 
  of some kind?
  
  /Chris
  
  On 22/11/12 5:40 PM, Riccardo S wrote:
  
  
  
  Hi expert team,
  
  I’ve a generic multicast question.
  
  
  
  Let’s say I’ve two different source
  sending different multicast traffic over the same multicast group, how 
  can I
  solve the problem for a customer receiver ?
  
  
  
  S1=1.1.1.1 G1=224.1.1.1
  
  S2=2.2.2.2 G2=224.1.1.1
  
  
  
  Customer receiver is usually running
  IGMPv2 and I cannot force to use IGMPv3.
  
  
  
  Network in the middle between source
  and receiver is not MPLS.
  
  
  
  I’ve thought to NAT multicast group,
  but I’m not sure it works and if anyone has already implemented it…
  
  
  
  Any different solution is
  appreciated…
  
  Tks
  
  Ric
  
  ___
  juniper-nsp mailing list juniper-nsp@puck.nether.net
  https://puck.nether.net/mailman/listinfo/juniper-nsp
  
  ___
  juniper-nsp mailing list juniper-nsp@puck.nether.net
  https://puck.nether.net/mailman/listinfo/juniper-nsp

[j-nsp] Same multicast group from the source

2012-11-22 Thread Riccardo S



Hi expert team,

I’ve a generic multicast question.

Let’s say I’ve two different sources sending different multicast traffic over
the same multicast group; how can I solve the problem for a customer receiver ?

S1=1.1.1.1 G1=224.1.1.1
S2=2.2.2.2 G2=224.1.1.1

The customer receiver is usually running IGMPv2 and I cannot force it to use
IGMPv3.

The network in the middle between source and receiver is not MPLS.

I’ve thought of NATting the multicast group, but I’m not sure it works, and I
don't know if anyone has already implemented it…

Any different solution is appreciated…

Tks

Ric
  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Same multicast group from the source

2012-11-22 Thread Riccardo S

no failover, they are completely different data...

Let's imagine we're talking about TV broadcasting: my customer wants to receive
today only channel S1, but tomorrow also, maybe, channel S2...

Ric

 Date: Thu, 22 Nov 2012 17:51:07 +0100
 From: ch...@chrishellberg.com
 To: dim0...@hotmail.com
 CC: juniper-nsp@puck.nether.net
 Subject: Re: [j-nsp] Same multicast group from the source
 
 It depends on what you want to do. Should the receiver receive both 
 sources simultaneously and demultiplex the two? Or do you need failover 
 of some kind?
 
 /Chris
 
 On 22/11/12 5:40 PM, Riccardo S wrote:
 
 
 
  Hi expert team,
 
  I’ve a generic multicast question.
 
 
 
  Let’s say I’ve two different source
  sending different multicast traffic over the same multicast group, how can I
  solve the problem for a customer receiver ?
 
 
 
  S1=1.1.1.1 G1=224.1.1.1
 
  S2=2.2.2.2 G2=224.1.1.1
 
 
 
  Customer receiver is usually running
  IGMPv2 and I cannot force to use IGMPv3.
 
 
 
  Network in the middle between source
  and receiver is not MPLS.
 
 
 
  I’ve thought to NAT multicast group,
  but I’m not sure it works and if anyone has already implemented it…
 
 
 
  Any different solution is
  appreciated…
 
  Tks
 
  Ric
  
  ___
  juniper-nsp mailing list juniper-nsp@puck.nether.net
  https://puck.nether.net/mailman/listinfo/juniper-nsp
 
  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] R: Re: Same multicast group from the source

2012-11-22 Thread Riccardo S
I cannot force my customer to use igmpv3
Do not think it works with igmpv2

sent with Android

Stacy W. Smith st...@acm.org ha scritto:

You might want to look at ssm-map-policy.

http://www.juniper.net/techpubs/en_US/junos12.2/topics/topic-map/multicast-ssm-map-for-different-groups-to-different-sources.html

I think it meets your needs, but depending on how dynamic your customer's 
requirements are, it may be cumbersome to maintain.

Here's an example:

[edit]
user@host# show protocols igmp 
interface ge-0/0/0.0 {
ssm-map-policy S1-only;
}
interface ge-0/0/1.0 {
ssm-map-policy S1-and-S2;
}
interface ge-0/0/2.0 {
ssm-map-policy S2-only;
}

[edit]
user@host# show policy-options 
policy-statement S1-and-S2 {
term 1 {
from {
route-filter 224.1.1.1/32 exact;
}
then {
ssm-source [ 1.1.1.1 2.2.2.2 ];
accept;
}
}
}
policy-statement S1-only {
term 1 {
from {
route-filter 224.1.1.1/32 exact;
}
then {
ssm-source 1.1.1.1;
accept;
}
}
}
policy-statement S2-only {
from {  
route-filter 224.1.1.1/32 exact;
}
then {
ssm-source 2.2.2.2;
accept;
}
}



On Nov 22, 2012, at 9:54 AM, Riccardo S dim0...@hotmail.com wrote:
 
 no failover, are completely different data...
 
 Let's imagine we're talking of TV broadcasting and my customer wants receive 
 today only channel S1 but tomorrow also, maybe, channel S2...
 
 Ric
 
 Date: Thu, 22 Nov 2012 17:51:07 +0100
 From: ch...@chrishellberg.com
 To: dim0...@hotmail.com
 CC: juniper-nsp@puck.nether.net
 Subject: Re: [j-nsp] Same multicast group from the source
 
 It depends on what you want to do. Should the receiver receive both 
 sources simultaneously and demultiplex the two? Or do you need failover 
 of some kind?
 
 /Chris
 
 On 22/11/12 5:40 PM, Riccardo S wrote:
 
 
 
 Hi expert team,
 
 I’ve a generic multicast question.
 
 
 
 Let’s say I’ve two different source
 sending different multicast traffic over the same multicast group, how can 
 I
 solve the problem for a customer receiver ?
 
 
 
 S1=1.1.1.1 G1=224.1.1.1
 
 S2=2.2.2.2 G2=224.1.1.1
 
 
 
 Customer receiver is usually running
 IGMPv2 and I cannot force to use IGMPv3.
 
 
 
 Network in the middle between source
 and receiver is not MPLS.
 
 
 
 I’ve thought to NAT multicast group,
 but I’m not sure it works and if anyone has already implemented it…
 
 
 
 Any different solution is
 appreciated…
 
 Tks
 
 Ric

 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp
 

 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

[j-nsp] R: Re: R: Re: Same multicast group from the source

2012-11-22 Thread Riccardo S
As far as I know SSM works with IGMPv3;
with IGMPv2 you need to go through the RP...

sent with Android

ch...@chrishellberg.com ha scritto:

Yes it does
---

-Original Message-
From: Riccardo S dim0...@hotmail.com
Date: Thu, 22 Nov 2012 20:56:43 
To: Stacy W. Smithst...@acm.org
Cc: ch...@chrishellberg.com; juniper-nsp@puck.nether.net
Subject: R: Re: [j-nsp] Same multicast group from the source

I cannot force my customer to use igmpv3
Do not think it works with igmpv2

sent with Android

Stacy W. Smith st...@acm.org ha scritto:

You might want to look at ssm-map-policy.

http://www.juniper.net/techpubs/en_US/junos12.2/topics/topic-map/multicast-ssm-map-for-different-groups-to-different-sources.html

I think it meets your needs, but depending on how dynamic your customer's 
requirements are, it may be cumbersome to maintain.

Here's an example:

[edit]
user@host# show protocols igmp 
interface ge-0/0/0.0 {
ssm-map-policy S1-only;
}
interface ge-0/0/1.0 {
ssm-map-policy S1-and-S2;
}
interface ge-0/0/2.0 {
ssm-map-policy S2-only;
}

[edit]
user@host# show policy-options 
policy-statement S1-and-S2 {
term 1 {
from {
route-filter 224.1.1.1/32 exact;
}
then {
ssm-source [ 1.1.1.1 2.2.2.2 ];
accept;
}
}
}
policy-statement S1-only {
term 1 {
from {
route-filter 224.1.1.1/32 exact;
}
then {
ssm-source 1.1.1.1;
accept;
}
}
}
policy-statement S2-only {
from {  
route-filter 224.1.1.1/32 exact;
}
then {
ssm-source 2.2.2.2;
accept;
}
}



On Nov 22, 2012, at 9:54 AM, Riccardo S dim0...@hotmail.com wrote:
 
 no failover, are completely different data...
 
 Let's imagine we're talking of TV broadcasting and my customer wants 
 receive today only channel S1 but tomorrow also, maybe, channel S2...
 
 Ric
 
 Date: Thu, 22 Nov 2012 17:51:07 +0100
 From: ch...@chrishellberg.com
 To: dim0...@hotmail.com
 CC: juniper-nsp@puck.nether.net
 Subject: Re: [j-nsp] Same multicast group from the source
 
 It depends on what you want to do. Should the receiver receive both 
 sources simultaneously and demultiplex the two? Or do you need failover 
 of some kind?
 
 /Chris
 
 On 22/11/12 5:40 PM, Riccardo S wrote:
 
 
 
 Hi expert team,
 
 I’ve a generic multicast question.
 
 
 
 Let’s say I’ve two different source
 sending different multicast traffic over the same multicast group, how 
 can I
 solve the problem for a customer receiver ?
 
 
 
 S1=1.1.1.1 G1=224.1.1.1
 
 S2=2.2.2.2 G2=224.1.1.1
 
 
 
 Customer receiver is usually running
 IGMPv2 and I cannot force to use IGMPv3.
 
 
 
 Network in the middle between source
 and receiver is not MPLS.
 
 
 
 I’ve thought to NAT multicast group,
 but I’m not sure it works and if anyone has already implemented it…
 
 
 
 Any different solution is
 appreciated…
 
 Tks
 
 Ric
   
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp
 
   
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] PIM Sparse issues on EX4200

2012-11-21 Thread Riccardo S

You are having issues getting sparse-mode PIM to work, but you configured dense mode ???

 interface all {
 mode dense;

pls try

set protocols pim interface all mode sparse-dense

HTH
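
If sparse-dense is the way taken, a hedged sketch (it replaces the current
"interface all" stanza; the dense-groups list keeps only the Auto-RP style
groups dense, everything else runs sparse):

delete protocols pim interface all mode
set protocols pim interface all mode sparse-dense
set protocols pim dense-groups 224.0.1.39/32
set protocols pim dense-groups 224.0.1.40/32

run show pim interfaces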

 From: maur...@three6five.com
 Date: Wed, 21 Nov 2012 20:27:54 +0200
 To: juniper-nsp@puck.nether.net
 Subject: [j-nsp] PIM Sparse issues on EX4200
 
 Hi
 
 I am having issues getting sparse mode PIM to work on a EX4200 that has been 
 set up as the RP.
 
 Here is the relevant Multicast config:
 --
 root@led-csw3 show configuration protocols 
 igmp {
 interface all {
 version 2;
 }
 }
 pim {
 rp {
 local {
 address 10.172.12.14;
 group-ranges {
 224.0.0.0/4;
 }
 }
 static {
 address 10.172.12.14 {
 group-ranges {
 224.0.0.0/4;
 }
 }
 }
 }
 interface all {
 mode dense;
 }
 interface me0.0 {
 disable;
 }
 }
 --
 
 It's currently set to dense mode as this is the only way that clients are 
 receiving traffic...
 
 From the EX I can see that there are IGMP-snooping membership requests:
 --
 root@led-csw3 show igmp-snooping membership detail 
 VLAN: IPTV_Clients Tag: 110 (Index: 2)
 Router interfaces:
 ge-0/0/0.0 dynamic Uptime: 02:04:32 timeout: 254
 ge-0/0/1.0 dynamic Uptime: 02:04:32 timeout: 250
Group: 224.0.1.40
  ge-0/0/0.0 timeout: 125 Last reporter: 172.16.0.3 Receiver count: 1, 
 Flags: V2-hosts
  ge-0/0/1.0 timeout: 251 Last reporter: 172.16.0.4 Receiver count: 1, 
 Flags: V2-hosts
Group: 224.0.1.60
  ge-0/0/1.0 timeout: 252 Last reporter: 172.16.6.153 Receiver count: 2, 
 Flags: V2-hosts
  ge-0/0/0.0 timeout: 191 Last reporter: 172.16.4.178 Receiver count: 1, 
 Flags: V2-hosts
Group: 224.0.1.178
  ge-0/0/0.0 timeout: 127 Last reporter: 172.16.5.34 Receiver count: 1, 
 Flags: V2-hosts
Group: 224.0.124.124
  ge-0/0/0.0 timeout: 258 Last reporter: 172.16.1.52 Receiver count: 6, 
 Flags: V2-hosts
  ge-0/0/1.0 timeout: 248 Last reporter: 172.16.1.149 Receiver count: 7, 
 Flags: V2-hosts
 --
 
 
 There are multicast sources sending traffic:
 --
 root@led-csw3 show pim join extensive
 Instance: PIM.master Family: INET
 R = Rendezvous Point Tree, S = Sparse, W = Wildcard
 
 Group: 239.250.0.110
 Source: 192.168.199.3
 Flags: dense
 Upstream interface: vlan.7
 Upstream neighbor: Direct
 Downstream interfaces:
 vlan.6
 
 Group: 239.250.0.112
 Source: 192.168.199.3
 Flags: dense
 Upstream interface: vlan.7
 Upstream neighbor: Direct
 Downstream interfaces:
 vlan.6
 
 Group: 239.250.0.120
 Source: 192.168.199.3
 Flags: dense
 Upstream interface: vlan.7
 Upstream neighbor: Direct
 Downstream interfaces:
 vlan.6
 --
 
 Yet there are no groups registered with the RP:
 --
 root@led-csw3 show pim rps extensive 
 Instance: PIM.master
 Address family INET
 
 RP: 10.172.12.14
 Learned via: static configuration
 Time Active: 00:18:54
 Holdtime: 0
 Device Index: 26
 Subunit: 32769
 Interface: pimd.32769
 Group Ranges:
 224.0.0.0/4
 
 Address family INET6
 
 {master:0}
 --
 
 As a result, when I change to sparse mode everything breaks...
 
 Has any one seen this before?
 
 In addition there are now some Cisco switches hanging off the juniper with 
 igmp-snooping set up and it is handling the sending of traffic to hosts that 
 actually need it...
 
 Mauritz 
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp
  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] MX5: PIM bootstrap-rp not learned

2012-11-08 Thread Riccardo S




Hi

I’ve an MX5 peering PIM with a Cisco 6500; the Cisco 6500 is also acting as RP
via bootstrap-rp.

Unfortunately my MX5 is not dynamically learning the RP IP address. Do I have
to configure something for that on the MX5 ?

I am wondering because with PIM peering I expected it to work without any
specific command…

If I specify a static RP everything is OK….

Any idea ?

Tks
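
A couple of hedged things to check, since 6500s are often configured with
Auto-RP rather than PIMv2 bootstrap, and Junos only listens to Auto-RP if told
to:

run show pim bootstrap
run show pim rps extensive

set protocols pim rp auto-rp discovery

If "show pim bootstrap" shows no BSR at all, the 6500 is probably announcing
the RP via Auto-RP, in which case the auto-rp discovery knob (with the
224.0.1.39/40 groups flooded dense) is the likely missing piece on the Junos
side.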


  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] GRE keepalive between Junos and IOS

2012-11-05 Thread Riccardo S



Hi

I’ve some GRE tunnels configured on my MX5, with a Cisco box on the other end
of the tunnel.

Any chance to get keepalives configured on GRE without OAM ?

Could you pls provide some examples of working configs with keepalives on GRE
for both Junos and IOS ?

Tks
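
In case it helps, a hedged pair of sketches. On Junos (MX), as far as I know
GRE keepalives exist only via the OAM stanza:

set protocols oam gre-tunnel interface gr-1/1/0.11 keepalive-time 10
set protocols oam gre-tunnel interface gr-1/1/0.11 hold-time 30

run show oam gre-keepalive interface-name gr-1/1/0.11

and on the IOS side (the tunnel number is an example):

interface Tunnel11
 keepalive 10 3

hold-time should be at least twice keepalive-time, and note the two mechanisms
are independent: each end only times out its own view of the tunnel.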

  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] GRE keepalive between Junos and IOS

2012-11-05 Thread Riccardo S

A bit more:

Also using OAM it is not working as expected; OAM detects the tunnel status as
down


# run show oam gre-keepalive interface-name gr-1/1/0.11
Interface name   Sent   Received   Status
gr-1/1/0.11      622    622        tunnel-oam-up

# run show oam gre-keepalive interface-name gr-1/1/0.165
Interface name   Sent   Received   Status
gr-1/1/0.165     472    0          tunnel-oam-down

 

but it remains up…

gr-1/1/0        up    up
gr-1/1/0.11     up    up    inet 10.1.1.73/30
gr-1/1/0.165    up    up    inet 10.1.33.73/30


Any idea ?


Tks




 From: dim0...@hotmail.com
 To: juniper-nsp@puck.nether.net
 Date: Mon, 5 Nov 2012 10:53:46 +
 Subject: [j-nsp] GRE keepalive between Junos and IOS
 
 
 
 
 Hi

 I’ve some GRE tunnels configured on my MX5, with a Cisco box on the other end
 of the tunnel.

 Any chance to get keepalives configured on GRE without OAM ?

 Could you pls provide some examples of working configs with keepalives on GRE
 for both Junos and IOS ?

 Tks
 
 
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp
  
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp