Hi,

Apply the same filter to both IFLs.

"Filter-specific" policer shares bandwidth if you use it multiple times in
the same filter (for example a policer referenced under multiple filter
terms)
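
For example, with "filter-specific" set on policer P, both references below
land on the same token bucket (just a rough sketch; the second term name and
source-class are only illustrative):

filter F1 {
     term NATIONAL {
         from {
             source-class C1;
         }
         then {
             /* first reference to P */
             policer P;
             accept;
         }
     }
     term NATIONAL-2 {
         from {
             source-class C2;
         }
         then {
             /* shares the same token bucket as the reference above */
             policer P;
             accept;
         }
     }
}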


If you apply one filter to multiple IFLs and the filter is NOT explicitly
defined as "interface-specific" (not interface-specific is the default), then
the policer is shared across all instances of the filter where it is applied.

Note that this will work only if the IFLs where the filter is applied are
under the same I-chip (PFE) group. There is no way to share a policer
instance between different PFEs.
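
So in your case, something like this should give you the aggregate 1000k
(rough sketch reusing your policer P and filter F1 unchanged; "family inet"
is my assumption, adjust if you filter a different family):

ge-5/2/1 {
     unit 0 {
         family inet {
             filter {
                 /* same filter on both IFLs -> one shared policer instance */
                 output F1;
             }
         }
     }
}
ge-5/2/2 {
     unit 0 {
         family inet {
             filter {
                 output F1;
             }
         }
     }
}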

HTH,
Krasi

> -----Original Message-----
> From: juniper-nsp-boun...@puck.nether.net [mailto:juniper-nsp-
> boun...@puck.nether.net] On Behalf Of Bit Gossip
> Sent: 03.07.2009 5:30 PM
> To: Sean Clarke
> Cc: juniper-nsp
> Subject: Re: [j-nsp] firewall policer
> 
> Unfortunately I have tested it, but the policer operates independently
> on the 2 interfaces, with the result that the total out of the 2 GEs is
> 2000k and not 1000k.
> 
> Any idea why, and how I can get it to work in aggregate fashion?
> 
> Thanks,
> bit.
> 
> On Wed, 2009-04-15 at 13:53 +0200, Sean Clarke wrote:
> > The way you have done it, the bandwidth will be shared.
> >
> >
> > Adding the filter-specific knob to the policer will make them unique,
> > i.e.
> >
> > policer P {
> >      filter-specific;<----
> >      if-exceeding {
> >          bandwidth-limit 1000k;
> >          burst-size-limit 15k;
> >      }
> >      then discard;
> > }
> >
> >
> >
> > On 4/15/09 1:33 PM, Bit Gossip wrote:
> > > Platform: MX480, Junos 9.3.
> > >
> > > In the following config the same policer is applied to 2 different
> > > interfaces via 2 different firewall filters.
> > >
> > > Will the policer police the aggregate traffic of the 2 interfaces at
> > > 1 Mbps, or will it police the 2 different interfaces independently at
> > > 1 Mbps each?
> > >
> > > ge-5/2/1 {
> > >      unit 0 {
> > >          filter {
> > >              output F1;
> > >          }
> > >      }
> > > }
> > > ge-5/2/2 {
> > >      unit 0 {
> > >          filter {
> > >              output F2;
> > >          }
> > >      }
> > > }
> > >
> > > policer P {
> > >      if-exceeding {
> > >          bandwidth-limit 1000k;
> > >          burst-size-limit 15k;
> > >      }
> > >      then discard;
> > > }
> > >
> > > filter F1 {
> > >      term NATIONAL {
> > >          from {
> > >              source-class C1;
> > >          }
> > >          then {
> > >              policer P;
> > >              count C1;
> > >              accept;
> > >          }
> > >      }
> > >      term REMAINING {
> > >          then {
> > >              count REMAINING;
> > >              accept;
> > >          }
> > >      }
> > > }
> > > filter F2 {
> > >      term NATIONAL {
> > >          from {
> > >              source-class C2;
> > >          }
> > >          then {
> > >              policer P;
> > >              count C2;
> > >              accept;
> > >          }
> > >      }
> > >      term REMAINING {
> > >          then {
> > >              count REMAINING;
> > >              accept;
> > >          }
> > >      }
> > > }
> > >
> > >
> > >
> > >
> >
> 
