Re: [j-nsp] Slow RE path 20x faster than PFE path

2020-03-24 Thread Alexander Arseniev via juniper-nsp

Hello,
Well, my first advice - don't use interface-style service-sets until You 
100% understand what You are actually doing. Just don't.
Second - don't try to mimic the SRX's NAPT-to-interface-address translation 
feature on MX with inline NAT; it is not supported, albeit technically 
possible, and it is very complex. Just don't.
Third - don't tinker with static routes to next-table and similar stuff 
in conjunction with inline services.
Fourth - use nexthop-style service-sets with both ends of the si- IFL pair 
in different routing-instances. It is the most straightforward inline 
NAT config possible - see the sketch below.
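
Purely as an illustration of that fourth point - a minimal nexthop-style 
inline NAT skeleton could look roughly like the below. All names, 
interface/unit numbers, prefixes and the basic-nat44 translation type are 
placeholders I made up for the example, not anything taken from Robert's box:

chassis {
    fpc 0 {
        pic 0 {
            inline-services {
                bandwidth 10g;
            }
        }
    }
}
interfaces {
    si-0/0/0 {
        unit 1 {
            family inet;
            service-domain inside;
        }
        unit 2 {
            family inet;
            service-domain outside;
        }
    }
}
services {
    nat {
        pool SNAT-POOL {
            address 203.0.113.0/24;
        }
        rule SNAT-RULE {
            match-direction input;
            term T1 {
                from {
                    source-address 10.0.0.0/8;
                }
                then {
                    translated {
                        source-pool SNAT-POOL;
                        translation-type basic-nat44;
                    }
                }
            }
        }
    }
    service-set INLINE-NAT {
        nat-rules SNAT-RULE;
        next-hop-service {
            inside-service-interface si-0/0/0.1;
            outside-service-interface si-0/0/0.2;
        }
    }
}
routing-instances {
    NAT-INSIDE {
        instance-type virtual-router;
        interface si-0/0/0.1;
        /* plus the customer-facing IFLs and a static/default route
           pointing out via si-0/0/0.1 */
    }
}

The inside routing-instance then needs a route (a default, for example) 
pointing out via si-0/0/0.1, while the outside leg si-0/0/0.2 stays in the 
main instance (or a second routing-instance) which carries the translated 
traffic onwards.
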
Hopefully that's enough to get You started, and without Your config I 
have no other ideas to share - perhaps others can chime in.

Thanks
Alex

-- Original Message --
From: "Robert Raszuk" 
To: "Alexander Arseniev" 
Cc: "Juniper List" 
Sent: 24/03/2020 08:24:36
Subject: Re: Re[2]: [j-nsp] Slow RE path 20x faster than PFE path



Yes NAT is configured there, as I indicated via the presence of the si- phantom 
load ... Having NAT there is not my idea though :). But sorry, I cannot 
share the config.


If you could shed some more light on your comment about how to properly 
configure it and what to avoid, I think it may be very useful for many 
folks on this list.


Many thx,
R.



On Tue, Mar 24, 2020 at 5:00 AM Alexander Arseniev 
 wrote:

Hello,




Another interesting observation is that the show command indicated 
services inline input traffic of over 33 Mpps and zero output, while the 
total coming to the box was at that time 1 Mpps


Do You have inline NAT configured on this box? Is it possible to share 
the config, please?
It is quite easy to loop traffic with NAT (inline or not), and while it is 
looped inside the same box, TTL does not get decremented, so You end up 
with eternal PFE saturation.


Thanks
Alex
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Slow RE path 20x faster than PFE path

2020-03-24 Thread Saku Ytti
You can confirm PPE load by:

RMPC0(r24.labxtx01.us.bb-re0 vty)# show xl-asic 0 npu_stats
NPU Stats Dump for FPC0:NPU0
Global NPU Utilization:  0.

(xl or lu, depending on platform)

Also, piping the output of 'show xl-asic 0 ppe cntx' to a file and
aggregating it can be very useful to see what the PPEs are doing,
particularly when you have datapoints from a working and a non-working
box; aggregating how many PPE contexts are at which uCode address will
be a dead give-away.
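
If it helps, a rough off-box aggregation once the dump is saved to a file -
just a sketch, the hex pattern below is a guess at the context-dump format
(it differs per platform/release), so adjust it to whichever field carries
the uCode address:

  # count PPE contexts per (assumed) uCode address in a saved dump
  grep -oE '0x[0-9a-fA-F]+' ppe-cntx.txt | sort | uniq -c | sort -rn | head -20

On the good box one address (the idle/sleep location) should dominate; on
the bad box the spread should look very different.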

I was recently troubleshooting an issue where PPS performance was good
after some reboots and less good after others, and looking at the PPE
contexts was illuminating for that. If this is the PPEs spending their
time on the NAT thing, then on the bad box the global NPU load should be
high and the context uCode address distribution should be wildly
different (on a good box, most contexts are sleeping).





On Tue, 24 Mar 2020 at 10:31, Robert Raszuk  wrote:
>
> Yes NAT is configured there, as I indicated via the presence of the si- phantom
> load ... Having NAT there is not my idea though :). But sorry, I cannot share the
> config.
>
> If you could shed some more light on your comment about how to properly configure
> it and what to avoid, I think it may be very useful for many folks on this
> list.
>
> Many thx,
> R.
>
>
>
> On Tue, Mar 24, 2020 at 5:00 AM Alexander Arseniev 
> wrote:
>
> > Hello,
> >
> >
> >
> > Another interesting observation is that the show command indicated services
> > inline input traffic of over 33 Mpps and zero output, while the total coming to
> > the box was at that time 1 Mpps
> >
> >
> > Do You have inline NAT configured on this box? Is it possible to share the
> > config, please?
> > It is quite easy to loop traffic with NAT (inline or not), and while it is looped
> > inside the same box, TTL does not get decremented, so You end up with eternal
> > PFE saturation.
> >
> > Thanks
> > Alex
> >
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp



--
  ++ytti
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Slow RE path 20x faster than PFE path

2020-03-24 Thread Robert Raszuk
Yes NAT is configured there, as I indicated via the presence of the si- phantom
load ... Having NAT there is not my idea though :). But sorry, I cannot share the
config.

If you could shed some more light on your comment about how to properly configure
it and what to avoid, I think it may be very useful for many folks on this
list.

Many thx,
R.



On Tue, Mar 24, 2020 at 5:00 AM Alexander Arseniev 
wrote:

> Hello,
>
>
>
> Another interesting observation is that the show command indicated services
> inline input traffic of over 33 Mpps and zero output, while the total coming to
> the box was at that time 1 Mpps
>
>
> Do You have inline NAT configured on this box? Is it possible to share the
> config, please?
> It is quite easy to loop traffic with NAT (inline or not), and while it is looped
> inside the same box, TTL does not get decremented, so You end up with eternal
> PFE saturation.
>
> Thanks
> Alex
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp