Re: [j-nsp] igmp snooping layer 2 querier breaks ospf in other devices

2024-02-03 Thread nebu thomas via juniper-nsp
 Hi Aaron,
Since the ACX series is based on a Broadcom PFE, and based on your description of the issue:

=> When you enable igmp-snooping on Broadcom-PFE devices (like the EX3400/QFX5100), some associated dynamic filters (IFP/VFP) get created in the PFE, and if there are discrepancies in those entries they can match unrelated multicast traffic and potentially drop it.

=> But if this were a bug in these areas, you should see the same behaviour in your lab with the exact same configs/topology (on the same release).
So, in your lab, please try the exact same configuration as on your production device. In particular, please make the firewall filters (the loopback filters) on your lab device identical to the ones configured on the production device.
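For example, to compare the RE-protection filters on the two boxes (PROTECT-RE here is only a placeholder for whatever filter name is actually applied in your network):

    show configuration interfaces lo0 unit 0 family inet filter
    show configuration firewall family inet filter PROTECT-RE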
Thanks, Nebu

On Friday, 2 February, 2024 at 11:00:40 pm IST, Aaron Gould via juniper-nsp 
 wrote:  
 
 Thanks for this... I think I misunderstood the use of l2-querier from a
previous project I worked on, and put it here where I really didn't need
it.  Moving forward I will only use igmp snooping in the vlan, and not
the l2-querier option.  But with all that said, I still don't understand
why OSPF inside an l2circuit is affected by my PIM/IGMP configs...
furthermore, why it breaks in the field yet works in the lab.
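For reference, plain snooping without the querier would just be (same vlan as in the command quoted below):

    set protocols igmp-snooping vlan vlan100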


-Aaron


On 2/2/2024 10:32 AM, Crist Clark wrote:
> I thought this was asked, but I don't recall an answer: what's the point 
> of turning on a querier if the switch is already a PIM router? You 
> don't need an IGMP snooping querier if it's a multicast router.
>
>
> On Fri, Feb 2, 2024 at 8:21 AM Aaron Gould via juniper-nsp 
>  wrote:
>
>    I tried to recreate the scenario in my lab with no success
>
>    21.2R3-S4.8 - in lab - problem not seen
>    20.2R3-S7.3 - in lab - problem not seen
>    19.2R3-S6.1 - in lab - problem not seen
>    18.3R3-S6.1 - in lab - problem not seen
>    17.4R2-S11  - in lab - problem not seen
>
>    17.4R2-S11  - in field - problem seen
>
>
>    Again, the problem is, when I enabled this command...
>
>    set protocols igmp-snooping vlan vlan100 l2-querier source-address
>    10.100.4.1
>
>    ...a customer riding an l2circuit on ge-0/0/2 reported to me that their
>    multicast stops working... OSPF goes down and stays in INIT...
>
>    When I remove all PIM and IGMP, their OSPF neighbors come up and
>    stabilize.
>
>    I just don't know how running IGMP inside vlan 100 on ports ge-0/0/4,
>    5 and 6 would have anything to do with an l2circuit on ge-0/0/2.
>
>
>    -Aaron
>
-- 
-Aaron


Re: [j-nsp] QFX5100 routing engine filter eats customer l2circuit packets

2017-01-17 Thread nebu thomas via juniper-nsp
Hi Chris,
Good to know that this is working for you as expected in D40. There is an 
"internal" PR mentioning a case regarding specific payload parsing in the 
L2ckt case. That issue is addressed via that PR in D40, and hence it works.
Thanks, Nebu.

 

  From: Chris Wopat 
 To: "juniper-nsp@puck.nether.net"  
Cc: nebu thomas 
 Sent: Wednesday, January 18, 2017 4:08 AM
 Subject: Re: [j-nsp] QFX5100 routing engine filter eats customer l2circuit 
packets
On 01/14/2017 02:25 AM, nebu thomas wrote:
> Hi Chris,
> Per your email, I understand it is this specific payload coming through the
> L2ckt which is triggering this issue.
>
> Hence my earlier suggestion to test with 14.1X53-D40, and verify whether it
> helps in your case.

We were able to do some lab testing on D40 today and it is indeed acting 
quite differently than D35 was.

* Previously, 'monitor traffic interface ' would 
consistently show some types of the tunneled traffic hitting the RE 
(EIGRP and OSPF were tested).

* On D35 I could make many adjustments to the QFX's lo0 filter to get 
that traffic to drop. On D40 I am no longer able to, as expected.

Interesting, as there were no fixes listed related to this. Perhaps the 
"LDP on IRB" change also fixed this behavior.

If you (or anyone here) are aware of the PR# related to this, I'd love 
to know what it was.

--Chris




Re: [j-nsp] QFX5100 routing engine filter eats customer l2circuit packets

2017-01-14 Thread nebu thomas via juniper-nsp
Hi Chris,
Per your email, I understand it is this specific payload coming through the 
L2ckt which is triggering this issue.
<<<>>>
So the issue you are seeing occurs when these particular packets are the 
payload of the L2ckt.
Hence my earlier suggestion to test with 14.1X53-D40, and verify whether it 
helps in your case.
Thanks, Nebu.





  From: Chris Wopat 
 To: juniper-nsp@puck.nether.net 
 Sent: Friday, January 13, 2017 10:15 PM
 Subject: Re: [j-nsp] QFX5100 routing engine filter eats customer l2circuit 
packets
On 01/13/2017 12:03 AM, nebuvtho...@yahoo.com  wrote:
 > Hi Chris,
 > Could you please test with 14.1X53-D40 on the QFX5100 and let us know
 > the outcome.
 > Thanks, Nebu V Thomas.

We do intend to test D40; are you aware of anything specific in this 
release that may address this?

We were also pointed to JSA10748 which clearly states:

    "Chipset on EX4300, EX4600, QFX3500, QFX5100 platforms
    sets destination port as CPU port even for transit
    multicast packets"


The rest of this JSA, however, is written quite confusingly and I'm not 
sure what to make of it.

It initially implies that there's an inherent 'accept all multicast' 
hidden term:

    "OSPF packets are making it to CPU due to implicit rule
    to allow IP reserved Multicast packets which is placed
    before last discard term"


but then later seems to contradict itself with:

    "QFX5100 platforms chipset's action resolution engine,
    Discard wins over any action and hence in the absence of
    implicit term to allow reserved multicast packets"


It is quite clear that transit multicast packets of some types (only 
IANA-reserved?) are punted to the RE filter.

However, doing something logical like placing this term before the final 
'discard' term doesn't seem to fix it:

    term multicast-accept {
        from {
            destination-address {
                224.0.0.0/4;
            }
        }
        then {
            count multicast-accept;
            accept;
        }
    }

If anyone has a concise explanation of what it's doing, I'm sure we can 
craft a proper filter, hopefully with a default action of 'discard' and 
not 'accept'.
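
For concreteness, the shape being aimed for is something like the sketch 
below (term names are illustrative, and per the test above the multicast 
term alone did not help on D35, so the open question is exactly which 
punted traffic the accept terms must cover):

    filter protect-re {
        term allow-ospf {
            from {
                protocol ospf;
            }
            then accept;
        }
        term multicast-accept {
            from {
                destination-address {
                    224.0.0.0/4;
                }
            }
            then accept;
        }
        term default-deny {
            then {
                count default-deny;
                discard;
            }
        }
    }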

--Chris




Re: [j-nsp] QFX5100 routing engine filter eats customer l2circuit packets

2017-01-12 Thread nebu thomas via juniper-nsp
Hi Chris,
Could you please test with 14.1X53-D40 on the QFX5100 and let us know the outcome.
Thanks, Nebu V Thomas.


  From: Chris Wopat 
 To: "juniper-nsp@puck.nether.net"  
 Sent: Thursday, January 12, 2017 8:14 PM
 Subject: [j-nsp] QFX5100 routing engine filter eats customer l2circuit packets
We deployed QFX5100 w/ 14.1X53-D35 fairly recently with basic features
(v4/v6/bgp/ospf). These replaced some EX4200s in a similar role with little
issue.

We recently enabled LDP on the QFX to enable l2circuits for customers.
Config-wise this was fine, with the known caveats of using native routing
on unit 0 (although 14.1X53-D40 seems to have come up with a workaround
for LDP to function on IRB).

A few customers had a variety of issues: circuits would only pass traffic
successfully when we dropped our lo0 filter on the QFX. The filter in
question has a default discard action.

Example: a customer running EIGRP on their equipment. Hello packets
(224.0.0.10) would ingress on the QFX l2circuit ethernet-ccc interface and
not egress at the far end. As a test, we also had them set static EIGRP
neighbors (unicast), but they had the same issue.

If we drop lo0 input filter OR craft it differently to be overly accepting,
things work.

Logs on the firewall filter term indicate that the packets in question show
up as MAC addresses with protocol 8847 (the MPLS unicast ethertype?). The
MACs below are Src: the LDP neighbor, Dst: the QFX in question.

Time      Filter  Action  Interface  Protocol  Src Addr           Dest Addr
07:45:48  pfe     A       ae4.0      8847      b0:c6:9a:b7:ef:c4  80:ac:ac:69:e1:f4
07:45:48  pfe     A       ae4.0      8847      b0:c6:9a:b7:ef:c4  80:ac:ac:69:e1:f4
07:45:46  pfe     A       ae4.0      8847      b0:c6:9a:b7:ef:c4  80:ac:ac:69:e1:f4

So it appears that a family inet filter is improperly matching a unicast
MPLS packet by MAC address? I have no idea why any of these would be punted
to the RE at all.

A few PRs that seem related are PR1028537, which states "L2 Control
protocols fail to come up between CEs across an ethernet pseudowire",
and, hilariously, PR1032007, which suggests "As a workaround, consider
using an alternative IGP protocol such as OSPF".

We have a case open with Juniper on this, but we're looking to see if
others have experienced it too.

Ultimately we're trying to get a better understanding of WHAT types of
packets are punted to the RE CPU so we can properly craft an ACL and pray
it fits in TCAM.

--Chris

Re: [j-nsp] GRE packet fragmentation on j-series

2012-01-31 Thread nebu thomas
Please refer to the appnote below:
 
http://www.juniper.net/us/en/local/pdf/app-notes/3500192-en.pdf
 
See the section "MPLSoGRE with GRE Fragmentation and Reassembly".

--Thanks



From: Ben Dale 
To: Lukasz Martyniak  
Cc: "Juniper-Nsp (juniper-nsp@puck.nether.net)"  
Sent: Tuesday, January 31, 2012 5:28 AM
Subject: Re: [j-nsp] GRE packet fragmentation on j-series

Hi Lukasz,

The J-Series only needs a license to download signature updates for IDP. In 
order to stop fragmentation, all you need to do is create a security policy 
that matches on GRE traffic ("match application junos-gre") and then 
references the IDP engine in the action ("then permit application-services 
idp").

This will force the IDP engine to re-assemble the GRE fragments for inspection 
(but not actually inspect them).  
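
A minimal sketch of that policy in set form, assuming hypothetical zone 
names trust/untrust and a policy name gre-reassembly:

    set security policies from-zone trust to-zone untrust policy gre-reassembly match source-address any
    set security policies from-zone trust to-zone untrust policy gre-reassembly match destination-address any
    set security policies from-zone trust to-zone untrust policy gre-reassembly match application junos-gre
    set security policies from-zone trust to-zone untrust policy gre-reassembly then permit application-services idp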

Juniper had a really good document explaining this with examples for MPLSoGRE, 
but my Google and KB-fu are failing.

Cheers,

Ben

On 26/01/2012, at 7:17 PM, Lukasz Martyniak wrote:

> Thanks for the quick response; I had hoped that this could be done another 
> way. I think the J-Series needs an extra license for IDP. 
> 
> On Jan 24, 2012, at 11:35 PM, Alex Arseniev wrote:
> 
>> My understanding is that GRE fragmentation should occur if the egress 
>> interface MTU is < the GRE packet size.
>> For GRE reassembly you need an IDP policy, which means a high-memory SRX 
>> model. An IDP license is not needed.
>> Rgds
>> Alex
>> 
>> - Original Message - From: "Lukasz Martyniak" 
>> 
>> To: 
>> Sent: Tuesday, January 24, 2012 2:04 PM
>> Subject: [j-nsp] GRE packet fragmentation on j-series
>> 
>> 
>>> Hi all
>>> 
>>> I have a problem with GRE tunnels: I need to fragment packets inside the 
>>> tunnel. I run GRE between two J-Series (Junos 10.4R6) and run MPLS over 
>>> it. The problem is that packets larger than the 1476-byte MTU are not 
>>> fragmented/reassembled and are dropped.
>>> 
>>> 
>>> interfaces gr-0/0/0
>>> unit 10 {
>>>     clear-dont-fragment-bit;
>>>     description "Tunnel to r1-lab";
>>>     tunnel {
>>>         source 10.200.0.1;
>>>         destination 10.200.0.2;
>>>         allow-fragmentation;
>>>         path-mtu-discovery;
>>>     }
>>>     family inet {
>>>         mtu 1500;
>>>         address 100.100.100.1/30;
>>>     }
>>>     family mpls {
>>>     }
>>> }
>>> 
>>> Has anyone had a similar problem? Is there a simple way to fix this?
>>> 
>>> Best Lukasz
>> 
> 
> 




Re: [j-nsp] srx with ethernet switching and chassis clustering

2011-08-01 Thread nebu thomas
Hi,
 
Reference: KB21422.
Layer 2 Ethernet switching on SRX240 and SRX650 devices is supported in 
chassis cluster mode from Junos OS Release 11.1 or later.
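
For what it's worth, a minimal sketch of a trunked L2 interface on such a 
cluster (interface and VLAN names are hypothetical, and per the KB this only 
works in cluster mode on 11.1 or later):

    set vlans cust-a vlan-id 100
    set vlans cust-b vlan-id 200
    set interfaces ge-0/0/5 unit 0 family ethernet-switching port-mode trunk vlan members [ cust-a cust-b ]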
 
Thanks.


From: Richard Zheng 
To: juniper-nsp@puck.nether.net
Sent: Monday, August 1, 2011 7:58 AM
Subject: [j-nsp] srx with ethernet switching and chassis clustering

Hi,

We have a configuration with multiple VRs to support multiple customers.
VLANs are used to trunk traffic into and out of the SRX. While trying to do
chassis clustering, it seems VLANs are not supported. How do you do chassis
clustering with multiple customers? Do you have dedicated interfaces for
each customer?

Thanks,
Richard