Re: [j-nsp] 6PE without family inet6 labeled-unicast

2018-07-22 Thread Dan Peachey
Hey,

Other posters are correct... IPv6 routes are not required; forwarding is done
on the received label, as the next hop is already known. I confused myself
because I was distributing the full IPv6 table in my lab testing and assumed
it was working because of that, but that's not the case.
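
For reference, the "textbook" 6PE setup we were comparing against looks
something along these lines (group name is just illustrative, knobs vary by
design):

protocols {
    mpls {
        ipv6-tunneling;    /* puts IPv4-mapped ::ffff:a.b.c.d/128 routes in inet6.3 */
    }
    bgp {
        group internal-v4 {
            family inet6 {
                labeled-unicast {
                    explicit-null;
                }
            }
        }
    }
}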

Cheers,

Dan


On Sun, 22 Jul 2018 at 20:23, Andrey Kostin  wrote:

> Hi Dan,
>
> Thanks for answering. All routers have family inet6 configured on all
> participating interfaces, because other v6 traffic is forwarded without
> MPLS, so we are safe for that.
>
>
> Kind regards,
> Andrey
>
> Dan Peachey wrote on 20.07.2018 16:40:
>
> >>
> >  Hi,
> >
> > Presumably the penultimate LSR has the IPv6 iBGP routes? i.e. it
> > knows how
> > to IPv6 route to the destination. The last LSR->LER hop should just
> > be IPv6
> > routed in that case.
> >
> > I've noticed this behaviour before whilst playing with 6PE on lab
> > devices. It would of course break if you were running an IPv4-only core.
> >
> > Cheers,
> >
> > Dan


Re: [j-nsp] 6PE without family inet6 labeled-unicast

2018-07-20 Thread Dan Peachey
On Fri, 20 Jul 2018 at 21:00, Andrey Kostin  wrote:

>
> Hello juniper-nsp,
>
> I've accidentally encountered an interesting behavior and am wondering if
> anyone has already seen it before or whether it's documented somewhere. A
> pointer to the docs would be appreciated.
>
> The story:
> We began to activate IPv6 for customers connected via a cable network
> after the cable provider eventually added IPv6 support. We receive prefixes
> from the cable network via eBGP and then redistribute them inside our AS
> with iBGP. There are two PEs connected to the cable network receiving the
> same prefixes, so for traffic load-balancing we change the next-hop to an
> anycast loopback address shared by those two PEs and use dedicated LSPs
> to that IP, with "no-install" for the real PE loopback addresses.
> IPv6 wasn't intended to use MPLS, and the existing plain iBGP sessions
> between IPv6 addresses with family inet6 unicast were supposed to be
> reused. However, the same export policy, with a term that changes the
> next-hop for a specific community, is used for both family inet and inet6,
> so it implicitly started to assign an IPv4 next-hop to IPv6 prefixes.
>
> Here is the example of one prefix.
>
> ## here PE receives prefix from eBGP neighbor:
>
> @re1.agg01.LLL2> show route ::e1bc::/46
>
> inet6.0: 52939 destinations, 105912 routes (52920 active, 1 holddown,
> 24 hidden)
> + = Active Route, - = Last Active, * = Both
>
> ::e1bc::/46*[BGP/170] 5d 13:16:26, MED 100, localpref 100
>AS path:  I, validation-state: unverified
>  > to :::f200:0:2:2:2 via ae2.202
>
> ## Now the PE advertises it to an iBGP neighbor with the next-hop changed to
> a plain IPv4 address:
> @re1.agg01.LLL2> show route ::e1bc::/46
> advertising-protocol bgp ::1::1:140
>
> inet6.0: 52907 destinations, 105843 routes (52883 active, 6 holddown,
> 24 hidden)
>Prefix  Nexthop  MED LclprefAS
> path
> * ::e1bc::/46 YYY.YYY.155.141  100 100
> I
>
> ## Same output as above with details
> {master}
> @re1.agg01.LLL2> show route ::e1bc::/46
> advertising-protocol bgp ::1::1:140 detail  ## Session is
> between v6 addresses
>
> inet6.0: 52902 destinations, 105836 routes (52881 active, 3 holddown,
> 24 hidden)
> * ::e1bc::/46 (3 entries, 1 announced)
>   BGP group internal-v6 type Internal
>   Nexthop: YYY.YYY.155.141   ## v6
> prefix advertised with plain v4 next-hop
>   Flags: Nexthop Change
>   MED: 100
>   Localpref: 100
>   AS path: []  I
>   Communities: :10102 no-export
>
>
> ## The iBGP neighbor receives the prefix with the rewritten next hop and
> uses the established LSPs to forward traffic:
> u...@re0.bdr01.lll> show route ::e1bc::/46
>
> inet6.0: 52955 destinations, 323835 routes (52877 active, 10 holddown,
> 79 hidden)
> + = Active Route, - = Last Active, * = Both
>
>
> ::e1bc::/46*[BGP/170] 5d 13:01:12, MED 100, localpref 100, from
> ::1::1:240
>AS path:  I, validation-state: unverified
>to YYY.YYY.155.14 via ae1.0, label-switched-path
> BE-bdr01.LLL--agg01.LLL2-1
>to YYY.YYY.155.9 via ae12.0, label-switched-path
> BE-bdr01.LLL--agg01.LLL2-2
>to YYY.YYY.155.95 via ae4.0, label-switched-path
> BE-bdr01.LLL--agg01.LLL-1
>to YYY.YYY.155.9 via ae12.0, label-switched-path
> BE-bdr01.LLL--agg01.LLL-2
>
> u...@re0.bdr01.lll> show route ::e1bc::/46 detail | match
> "Protocol|:|BE-"
> ::e1bc::/46 (3 entries, 1 announced)
>  Source: ::1::1:240
>  Label-switched-path BE-bdr01.LLL--agg01.LLL2-1
>  Label-switched-path BE-bdr01.LLL--agg01.LLL2-2
>  Label-switched-path BE-bdr01.LLL--agg01.LLL-1
>  Label-switched-path BE-bdr01.LLL--agg01.LLL-2
>  Protocol next hop: :::YYY.YYY.155.141
> ### It seems the IPv4 next hop has been converted to its IPv4-mapped IPv6 form
>  Task: BGP_.::1::1:240
>  Source: ::1::7
>
> ## The policy assigning next-hop is the same for v4 and v6 sessions,
> only one term is shown:
> @re1.agg01.LLL2> show configuration protocols bgp group internal-v4
> export
> export [ deny-rfc3330 to-bgp ];
>
> {master}
> @re1.agg01.LLL2> show configuration protocols bgp group internal-v6
> export
> export [ deny-rfc3330 to-bgp ];
>
>
> @re1.agg01.LLL2> show configuration policy-options policy-statement
> to-bgp | display inheritance no-comments
> term - {
>  from {
>  community -;
>  tag 33;
>  }
>  then {
>  next-hop YYY.YYY.155.141;
>  accept;
>  }
> }
>
>
> u...@re0.bdr01.lll> show route forwarding-table destination
> ::e1bc::/46
> Routing table: default.inet6
> 

Re: [j-nsp] IPv6 VRRP on ACX

2018-04-30 Thread Dan Peachey
On 30 April 2018 at 11:23, Dan Peachey <d...@peachey.co> wrote:

> Hi,
>
> I'm trying to get IPv6 VRRP working between two ACX5048 running JunOS
> 15.1X54-D61.6 and so far failing.
>
> I have configured it as per the following example:
>
> https://www.juniper.net/documentation/en_US/junos/topics/example/vrrp-for-ipv6-configuring-example.html
>
> Although not mentioned in the above, I've also enabled VRRPv3.
>
> The problem I've got is that both sides are stuck in the master state. I
> know there are no L2 path issues, as IPv4 VRRP is working fine on the same
> path.
>
> Anyone else had issues with IPv6 VRRP on this platform?
>
> Cheers,
>
> Dan
>

Hi,

This turned out to be the lo0 filter. We'd allowed traffic to the VRRP
multicast address, but only from link-local sources. Once we added the
relevant GUA source to the filter, it started working fine.
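
Roughly the sort of term we ended up with, in case it helps anyone else
(filter name and GUA prefix are placeholders):

firewall {
    family inet6 {
        filter protect-re-v6 {
            term allow-vrrp {
                from {
                    source-address {
                        fe80::/10;
                        2001:db8:abcd::/48;    /* the relevant GUA range */
                    }
                    destination-address {
                        ff02::12/128;          /* VRRP for IPv6 multicast */
                    }
                    next-header 112;           /* vrrp */
                }
                then accept;
            }
        }
    }
}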

Thanks to off-list posters for the pointers/help.

Cheers,

Dan


[j-nsp] IPv6 VRRP on ACX

2018-04-30 Thread Dan Peachey
Hi,

I'm trying to get IPv6 VRRP working between two ACX5048 running JunOS
15.1X54-D61.6 and so far failing.

I have configured it as per the following example:

https://www.juniper.net/documentation/en_US/junos/topics/example/vrrp-for-ipv6-configuring-example.html

Although not mentioned in the above, I've also enabled VRRPv3.
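
Roughly what I've configured, with the interface and addresses swapped for
documentation ones:

protocols {
    vrrp {
        version-3;
    }
}
interfaces {
    ae0 {
        unit 100 {
            family inet6 {
                address 2001:db8:0:1::2/64 {
                    vrrp-inet6-group 1 {
                        virtual-inet6-address 2001:db8:0:1::1;
                        virtual-link-local-address fe80::1:1;
                        priority 200;
                        accept-data;
                    }
                }
            }
        }
    }
}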

The problem I've got is that both sides are stuck in the master state. I know
there are no L2 path issues, as IPv4 VRRP is working fine on the same path.

Anyone else had issues with IPv6 VRRP on this platform?

Cheers,

Dan


Re: [j-nsp] Limit content of bgp.l3vpn.0

2016-09-28 Thread Dan Peachey
On 28 September 2016 at 14:47, Saku Ytti  wrote:
> On 28 September 2016 at 16:38, Johan Borch  wrote:
>
>> Will the route-target family work even if it is Cisco at one end?
>
> Yes, IOS supports route-target SAFI.
>
> --
>   ++ytti


Hi,

I may be missing something, but shouldn't the default behaviour be to drop
all VPN routes that are not locally imported into a VRF, unless you configure
"keep all" or the PE is also configured as an RR?

A quick test on a PE not running as an RR:

Before "keep all" configured on the BGP group:

--
admin@lab1> show route table bgp.l3vpn.0

bgp.l3vpn.0: 37 destinations, 37 routes (37 active, 0 holddown, 0 hidden)
--

After "keep all" configured on the BGP group:

--
admin@lab1> show route table bgp.l3vpn.0

bgp.l3vpn.0: 165562 destinations, 165562 routes (165562 active, 0
holddown, 0 hidden)
--
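
For clarity, the knob in question is just the group-level statement, e.g.
(group name being whatever your iBGP group is called):

set protocols bgp group <ibgp-group> keep all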

Cheers,

Dan


Re: [j-nsp] Optimizing the FIB on MX

2016-02-19 Thread Dan Peachey


On 19/02/2016 10:53, Alexander Marhold wrote:

Hi

You wrote:

One thing I haven't seen mentioned is that routers need the indirect-nexthop
feature enabled

IMHO exactly this is also called PIC (prefix independent convergence). To be
exact, to get prefix-count-independent convergence you need a pointer to a
next-hop-pointer structure, which then points to the next hop. In case of a
change of the next hop, you only need to change the pointer in the
next-hop-pointer structure, independent of how many prefixes are using that
next hop.

Regards

alexander



Cisco often call this PIC core (hierarchical FIB). I think the different
terms used by the different vendors cause some confusion. From what I
understand...


Cisco:

H-FIB = PIC Core
Node protection = PIC Edge Node Protection
Link protection = PIC Edge Link Protection

Juniper:

H-FIB = Indirect next-hop
Node protection = PIC Edge
Link protection = Provider edge link protection

However, I've also seen node protection referred to as PIC core in some
Cisco documentation, so who knows :)
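
For reference, the Juniper-side knobs I have in mind look roughly like this
(instance name invented; availability depends on platform and release):

routing-options {
    forwarding-table {
        indirect-next-hop;    /* hierarchical FIB / shared indirect next hops */
    }
}
routing-instances {
    CUST-A {
        routing-options {
            protect core;     /* BGP PIC edge for this VRF */
        }
    }
}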


Regards,

Dan


Re: [j-nsp] Q re: 6PE on a native IPv6 enabled network

2016-01-11 Thread Dan Peachey
On 11 January 2016 at 18:53, Michael Hare  wrote:

> j-nsp,
>
> I'd like to deploy 6PE on an existing dual-stack network so that native IPv6
> prefixes can take advantage of path benefits MPLS has to offer.  In my
> setup it seems that traffic from PE1 to PE2 [PE2 router id: x.x.32.8] is
> being load balanced between MPLS and native IPv6, showing equal cost
> protocol next hops of :::x.x.32.8 and y:y:0:100::8.
>
> If I want to force all v6 traffic into MPLS, is the only solution to
> disable native family inet6 iBGP between PE and RR, or is there a way to
> make the native IPv6 protocol next hop less preferable in some way?
>
> -Michael
>
> /
>
> Please note equal cost path to z:z::/32 which is learned from CE on PE2.
>
> user@PE1-re0> show route z:z:: active-path
>
> inet6.0: 213 destinations, 287 routes (213 active, 0 holddown, 0 hidden)
> + = Active Route, - = Last Active, * = Both
>
> z:z::/32 *[BGP/170] 6d 06:17:47, MED 0, localpref 2020, from
> y:y:0:100::a
>   AS path: 46103 I, validation-state: unverified
>   to x.x.33.123 via ae0.3449, Push 301680
>   to x.x.33.130 via ae2.3459, Push 315408
> > to fe80::86b5:9c0d:790d:d7f0 via ae0.3449
>   to fe80::86b5:9c0d:8393:d92a via ae2.3459
>
> user@PE1-re0> show route z:z:: active-path detail
>
> inet6.0: 213 destinations, 287 routes (213 active, 0 holddown, 0 hidden)
> z:z::/32 (5 entries, 1 announced)
> *BGPPreference: 170/-2021
> Next hop type: Indirect
> Address: 0x3574754
> Next-hop reference count: 6
> Source: y:y:0:100::a
> Next hop type: Router, Next hop index: 1048816
> Next hop: x.x.33.123 via ae0.3449 weight 0x1
> Label operation: Push 301680
> Label TTL action: prop-ttl
> Load balance label: Label 301680: None;
> Session Id: 0x151
> Next hop: x.x.33.130 via ae2.3459 weight 0xf000
> Label operation: Push 315408
> Label TTL action: prop-ttl
> Load balance label: Label 315408: None;
> Session Id: 0x14f
> Next hop type: Router, Next hop index: 1048583
> Next hop: fe80::86b5:9c0d:790d:d7f0 via ae0.3449 weight
> 0x1, selected
> Session Id: 0x14b
> Next hop: fe80::86b5:9c0d:8393:d92a via ae2.3459 weight
> 0xf000
> Session Id: 0x143
> Protocol next hop: :::x.x.32.8
> Indirect next hop: 0x358f478 1048718 INH Session ID: 0x1e7
> Protocol next hop: y:y:0:100::8
> Indirect next hop: 0x35d8b68 1048578 INH Session ID: 0x14e
> State: 
> Local AS: 65400 Peer AS: 65400
> Age: 6d 6:17:39 Metric: 0   Metric2: 2055
> Validation State: unverified
> Task: BGP_65400.y:y:0:100::a+179
> Announcement bits (3): 0-KRT 2-RT 6-Resolve tree 2
> ...
> ...
>

Hi,

Can you post your BGP and MPLS config? It works OK in my lab (labelled
route preferred over un-labelled route).

Cheers,

Dan


[j-nsp] EX4300 - Too many VLAN-IDs on interface

2015-10-21 Thread Dan Peachey
Hi,

I just hit an issue where I tried to configure 2000 VLANs on a LAG and got
the error message "Too many VLAN-IDs on interface". If I remove the
interface from the LAG I am able to configure all the VLANs without issue.
Based on the unit number that threw the error, it seems the number of
configurable VLANs is limited to 1024.

The LAG had two member interfaces, each on a separate switch in a two-switch
VC stack. Encapsulation on the LAG was "extended-vlan-bridge".

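For reference, the config is roughly the following, with names and IDs
illustrative and one unit per VLAN repeated up towards 2000:

interfaces {
    ae0 {
        flexible-vlan-tagging;
        encapsulation extended-vlan-bridge;
        unit 100 {
            vlan-id 100;
        }
        unit 101 {
            vlan-id 101;
        }
    }
}
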
Anyone else seen this? Is this a known limitation?

JunOS version is 13.2X51-D36.1.

Cheers,

Dan


Re: [j-nsp] CoS buffer size

2015-07-03 Thread Dan Peachey
Hi Adam,

My understanding was that you might be able to oversubscribe only using PIR
 (need to test).
 And in that case all the queues are in the excess region.
 So only the excess priorities are honoured (HI and LO in strict priority
 fashion) and queues with the same priority are serviced round robin.


You can oversubscribe both PIR and CIR (G-rate), so you have to be careful
how much G-rate you allocate (if you are selling a PIR/CIR service, that is).
In H-CoS mode with only PIR set, all queues are in excess even if the
aggregate of all the shapers does not oversubscribe the interface bandwidth
(or the aggregate shaper). In per-unit mode with only PIR set, you have to
oversubscribe the shapers to end up with all queues in excess.


 I also thought that weight with which the queues in excess region are
 served is proportional to the transmit-rate (as percentage of VLAN PIR).

Though looking at the show outputs and reading your test results it looks
 like it's not the case.


Well, the weights are determined from the transmit-rates, but the queues
aren't proportioned in the way you'd expect relative to those transmit-rates.
For example, you'll find that you can get packet loss in a queue that is
sending less than its contracted rate if you oversubscribe another queue at
the same priority level. The weightings don't translate to an exact
percentage of bandwidth in reality.


 But have you tried setting excess-rate for the queues - that should be
 honoured while a given queue is in excess region right?


Yep, but if you set excess-rate = transmit-rate then the weights are just
the same as if you hadn't set them and it doesn't affect the behaviour of
the queues.
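
For example, something like this per scheduler (percentages just taken from my
test policy) behaves the same as leaving excess-rate off entirely:

schedulers {
    BE {
        transmit-rate percent 24;
        excess-rate percent 24;
    }
}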


 I just can't believe that once in the excess region the queues (using %) are
 using the main interface PIR - is it a bug please?


It's more that the per-queue guaranteed rates are set to zero when you are in
H-CoS or oversubscribed per-unit mode, which means you have to factor in a
G-rate for each node when it would be nice not to have to bother (and you can
choose not to bother, but then things don't work quite the way you expect them
to).
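
To illustrate what I mean by factoring in the G-rate, the numbers being purely
an example:

class-of-service {
    traffic-control-profiles {
        CUST-10M {
            scheduler-map CUST-MAP;
            shaping-rate 10m;      /* PIR */
            guaranteed-rate 4m;    /* CIR / G-rate, which can be oversubscribed */
        }
    }
}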


 With regards to buffers
 May I oversubscribe buffers by configuring say delay-buffer-rate percent
 10 for 11 VLANs on a physical interface please?


Not sure on that one, never tried to be honest.

Dan


Re: [j-nsp] CoS buffer size

2015-06-25 Thread Dan Peachey
On 24 June 2015 at 21:05, Saku Ytti s...@ytti.fi wrote:

 On 24 June 2015 at 22:29, Dan Peachey d...@illusionnetworks.com wrote:

 Hey,

  I thought the weights were determined by the %? The weights are then used
  to schedule the queues appropriately. Even if the queues are in excess,
  they should be weighted correctly?

 I tested this when Trio came out, and couldn't get QoS working,
 because my AFB class was congesting my BE class fully. Solution was to
 configure guaranteed-rate==shaping-rate, at which point the
 percentages were honoured. This is actually what DPCE generation did
 by default. I think the reason for grate being 0 by default, is
 because trio supports additional level of shaping, in which use-case
 it may make sense.

 As I recall you can oversub grate, but I can't recall testing it (For
 my scenarios, majority of traffic is non-qos traffic, so it's not an
 issue either way).


Hi Saku,

I've run some tests and experienced the same thing as you. I had BE traffic
affecting one of my other queues which was not oversubscribed, causing packet
loss. Setting the guaranteed rate equal to the shaping rate seemed to fix it.
It seems odd to me that this needs to be done. The documentation I've read
appears to suggest that in PIR mode (no guaranteed-rate set) the per-queue
guarantee/transmit rate is calculated from the shaper rate, and that a queue
is only in excess once it exceeds its guaranteed rate, but this doesn't appear
to be the case and the queues are always in excess.

Thanks,

Dan


Re: [j-nsp] CoS buffer size

2015-06-25 Thread Dan Peachey
On 25 June 2015 at 15:48, Marcin Wojcik mail.network.b...@gmail.com wrote:

 Hi Dan,

  Seems odd to me that this needs to be done. Documentation I've read
 appears
  to suggest that in PIR mode (no guaranteed-rate set) the per-queue
  guarantee/transmit rate is calculated from the shaper rate and when a
 queue
  exceeds it's guaranteed rate it is in excess, but this doesn't appear to
 be
  the case and queues are always in excess.

 That would be the case only if you use the exact or rate-limit options.
 When those options are omitted (percentage value only), the transmit rate
 is calculated from the parent interface PIR; hence, one has to set up a
 g-rate.

 Thanks,
 Marcin.



Thanks, makes sense now.

Dan


Re: [j-nsp] CoS buffer size

2015-06-24 Thread Dan Peachey
Thanks Steven and Saku, that's what I was looking for.

Dan


Re: [j-nsp] CoS buffer size

2015-06-24 Thread Dan Peachey
On 23 June 2015 at 17:43, Saku Ytti s...@ytti.fi wrote:

 I don't have access to any JNPR box right now, so can't give exact command.

 But as you're using QX for scheduling, you'll need the chipID, and
 L2/L3 index, and taildrop index, then you can use 'show qx ..' to
 fetch the size of the tail, which will effectively tell the buffer
 size in bytes.

 I usually decide how many buffer I want in queue, then solve X in:

 shaper_rate * queue_percent * X = n bytes

 Then I'll just configure queue temporal queue with value X.
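
As a worked example of that formula, with figures picked purely for
illustration: to give a 24% queue under a 10M shaper roughly 50 kB of buffer,
X = 50,000 bytes * 8 / (10,000,000 b/s * 0.24) ~= 0.167 s, i.e. roughly
"buffer-size temporal 167k" (the temporal value being in microseconds, if I
have the units right).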



I must be missing something, but it seems that regardless of what I set as
a temporal buffer, the byte buffer value assigned doesn't appear to change.

I set all 5 queues with temporal buffer values:


xe-0/0/1   1640   0   0   0
  xe-0/0/1.20003291   0   0   0
q 1 - pri 0/0372720 24%  165000  0%
q 2 - pri 0/0372720  2%  50  0%
q 3 - pri 2/0372720  1%  50  0%
q 4 - pri 0/0372720 24%  165000  0%
q 5 - pri 3/0372720 24%   2  0% exact


The byte values don't make any sense to me:

NPC0(pe1-RE0 vty)# show qxchip 0 tail-rule 33 0 0
Tail drop rule configuration   : 33
ref_count: 0
 Drop Engine 0   :
   Tail drop rule ram address   : 0840
   threshold: 2686976 bytes

NPC0(pe1-RE0 vty)# show qxchip 0 tail-rule 34 0 0
Tail drop rule configuration   : 34
ref_count: 0
 Drop Engine 0   :
   Tail drop rule ram address   : 0880
   threshold: 2801664 bytes

NPC0(pe1-RE0 vty)# show qxchip 0 tail-rule 35 0 0
Tail drop rule configuration   : 35
ref_count: 0
 Drop Engine 0   :
   Tail drop rule ram address   : 08c0
   threshold: 2932736 bytes

NPC0(pe1-RE0 vty)# show qxchip 0 tail-rule 36 0 0
Tail drop rule configuration   : 36
ref_count: 0
 Drop Engine 0   :
   Tail drop rule ram address   : 0900
   threshold: 3063808 bytes
shift   : 14
 mantissa   : 187

NPC0(pe1-RE0 vty)# show qxchip 0 tail-rule 37 0 0
Tail drop rule configuration   : 37
ref_count: 0
 Drop Engine 0   :
   Tail drop rule ram address   : 0940
   threshold: 3194880 bytes


The two queues that have the same transmit-percent and buffer-size (33 and
36) even have different byte sizes.

What am I missing?

Thanks,

Dan


Re: [j-nsp] CoS buffer size

2015-06-24 Thread Dan Peachey

 Hey Dan,


  I must be missing something, but it seems that regardless of what I set
 as
  a temporal buffer, the byte buffer value assigned doesn't appear to
 change.

 Can you give

 CLI config (TCP, shaper and queues)
 show cos halp ifl X
 show qx N tail-rule Y 0 0
 show qx N q Z queue-length

 With two different temporal values in queues.

 Remember that you must configure guaranteed-rate for shaping to work in QX
 as
 accustomed to DPCE and MQ. Otherwise all your queues are in excess region.


Hi Saku,

Requested outputs below (this is not a finished policy, for now I'm just
playing with buffers mainly).

The IFD is in PIR mode as only PIR is configured (the interface could be
oversubscribed, so configuring a guaranteed rate doesn't make sense). As I
understand it, the queues are therefore always in excess (apart from the
rate-limited queue), with weight proportional to the transmit-rate (correct
me if I am wrong).

Thanks,

Dan

--

class-of-service {
    traffic-control-profiles {
        10M {
            scheduler-map 10M_COS;
            shaping-rate 10m;
        }
    }
    interfaces {
        xe-0/0/1 {
            unit 2000 {
                output-traffic-control-profile 10M;
            }
        }
    }
    scheduler-maps {
        10M_COS {
            forwarding-class Q5 scheduler RT;
            forwarding-class Q2 scheduler SIG;
            forwarding-class Q1 scheduler PRI;
            forwarding-class Q3 scheduler NC;
            forwarding-class Q0 scheduler BE;
        }
    }
    schedulers {
        NC {
            transmit-rate percent 1;
            buffer-size temporal 500k;
            priority medium-high;
            excess-priority high;
        }
        RT {
            transmit-rate {
                percent 24;
                rate-limit;
            }
            buffer-size temporal 20k;
            priority high;
        }
        SIG {
            transmit-rate percent 2;
            buffer-size temporal 500k;
        }
        PRI {
            transmit-rate percent 24;
            buffer-size temporal 165k;
        }
        BE {
            transmit-rate percent 24;
            buffer-size temporal 165k;
        }
    }
}


NPC0(pe1-RE0 vty)# show cos halp ifl 329
IFL type: Basic


IFL name: (xe-0/0/1.2000, xe-0/0/1)   (Index 329, IFD Index 164)
QX chip id: 0
QX chip dummy L2 index: 4
QX chip Scheduler: 4
QX chip L3 index: 4
QX chip base Q index: 32
Number of queues: 8
QueueStateMax   Guaranteed   Burst  Weight Priorities
Drop-Rules
Index rate rate  sizeGE   Wred
 Tail
-- --- ---  --- -- --
--
32  Configured10000  131072320   GL   EL4
  0
33  Configured10000  131072320   GL   EL4
  0
34  Configured10000  131072 26   GL   EL4
  0
35  Configured10000  131072 13   GM   EH4
127
36  Configured10000  131072  1   GL   EL0
255
37  Configured1000  240  131072320   GH   EH4
193
38  Configured10000  131072  1   GL   EL0
255
39  Configured10000  131072  1   GL   EL0
255

Rate limit info:
Q 5: Bandwidth = 240, Burst size = 73536. Policer NH:
0x3077afaa0003b000

Index NH: 0xda4be18980801006


NPC0(pe1-RE0 vty)# show qxchip 0 tail-rule 33 0 0
Tail drop rule configuration   : 33
ref_count: 0
 Drop Engine 0   :
   Tail drop rule ram address   : 0840
   threshold: 2686976 bytes
shift   : 14
 mantissa   : 164
 Drop Engine 1   :
   Tail drop rule ram address   : 0840
   threshold: 2686976 bytes
shift   : 14
 mantissa   : 164


NPC0(pe1-RE0 vty)# show qxchip 0 tail-rule 34 0 0
Tail drop rule configuration   : 34
ref_count: 0
 Drop Engine 0   :
   Tail drop rule ram address   : 0880
   threshold: 2801664 bytes
shift   : 14
 mantissa   : 171
 Drop Engine 1   :
   Tail drop rule ram address   : 0880
   threshold: 2801664 bytes
shift   : 14
 mantissa   : 171


NPC0(pe1-RE0 vty)# show qxchip 0 q 32 queue-length
QX Queue Index: 32
   Current instantaneous queue depth : 0 bytes

   WRED TAQL queue depth : 0 bytes

   Configured queue depth:
 region   color   queue-depth
 --   -   ---
   0   0 4096
   0   1 4096
   0   2 4096
   0   3 4096
  

[j-nsp] CoS buffer size

2015-06-23 Thread Dan Peachey
Hi all,

I have an IFL on a 10G interface configured with a traffic-control-profile
that has a 10M shaper and references a scheduler-map with 5 queues. The
platform is MX960 with MPC2E-3D-Q linecard.

I would like to find out the per-queue buffer sizes as configured on the
PFE in bytes. Is there a command to show that?

I know I could work it out but if there is a command to show it and I can
be lazy then I'm all for that :)

If I delve into the PFE I can look at the scheduler hierarchy for the IFL
(hopefully the formatting keeps):


NPC0(pe1-RE0 vty)# show cos scheduler-hierarchy

--snip--

class-of-service egress scheduler hierarchy - rates in kbps
-
shaping guarntd delaybf  excess
interface name   indexraterateraterate
 other
 -  --- --- --- ---
-
xe-0/0/1   1640   0   0   0
  xe-0/0/1.20003291   0   0   0
q 1 - pri 0/0372720 24% 24%  0%
q 2 - pri 0/0372720  2%  2%  0%
q 3 - pri 2/0372720  1%  5%  0%
q 4 - pri 0/0372720 24% 24%  0%
q 5 - pri 3/0372720 24%5000  0% exact
  xe-0/0/1.32767   429020002000   0
q 0 - pri 0/120 95% 95%  0%
q 3 - pri 0/120  5%  5%  0%


But that doesn't give me the buffer sizes in bytes, only as they are
configured in the config.

If I look closer at the IFL specifically I get:


NPC0(pe1-RE0 vty)# show cos halp ifl 329
IFL type: Basic


IFL name: (xe-0/0/1.2000, xe-0/0/1)   (Index 329, IFD Index 164)
QX chip id: 0
QX chip dummy L2 index: 4
QX chip Scheduler: 4
QX chip L3 index: 4
QX chip base Q index: 32
Number of queues: 8
QueueStateMax   Guaranteed   Burst  Weight Priorities
Drop-Rules
Index rate rate  sizeGE   Wred
 Tail
-- --- ---  --- -- --
--
32  Configured10000  131072  1   GL   EL0
255
33  Configured10000  131072320   GL   EL4
 10
34  Configured10000  131072 26   GL   EL4
  2
35  Configured10000  131072 13   GM   EL4
131
36  Configured10000  131072320   GL   EL4
 10
37  Configured1000  240  131072320   GH   EH4
193
38  Configured10000  131072  1   GL   EL0
255
39  Configured10000  131072  1   GL   EL0
255

Rate limit info:
Q 5: Bandwidth = 240, Burst size = 73536. Policer NH:
0x3077af9a0003c000

Index NH: 0xda4be18996801006



That shows me the burst size in bytes but that is for the overall egress
10M shaper I assume.

So I wonder if there is something that will show it on a per-queue basis?

Thanks,

Dan


Re: [j-nsp] transmit-rate percent shaping-rate working together

2015-06-19 Thread Dan Peachey
Hi Adam,

Try embedding it in a TCP, like so:

class-of-service {
    traffic-control-profiles {
        SHAPER {
            scheduler-map SCHEDULER;
            shaping-rate 4m;
        }
    }
    interfaces {
        ge-1/3/0 {
            unit 0 {
                output-traffic-control-profile SHAPER;
            }
        }
    }
}

Cheers,

Dan

On 19 June 2015 at 16:00, Adam Vitkovsky adam.vitkov...@gamma.co.uk wrote:

 Hi Folks,

 Is there a way to make the set class-of-service schedulers Class_1
 transmit-rate percent 20
 to actually use class-of-service interfaces ge-1/3/0 unit 0 shaping-rate
 4m
 when allocating BW for the class please?

 Thank you very much

 adam









Re: [j-nsp] Multiple policers for interface/units

2015-06-04 Thread Dan Peachey
On 2 June 2015 at 21:15, Chris Adams c...@cmadams.net wrote:

 I have used policers on units to limit the traffic for a particular
 VLAN, but now I have a need to limit the total traffic on an interface.
 I have a gigE link that is telco-limited to 500Mbps (but I need to
 police the link so I don't put more than 500M in), with several VLANs
 that each need to have their own rate.

 I haven't done that before; what's the best way to do that?

 This is on an MX960.
 --
 Chris Adams c...@cmadams.net



Hi Chris,

I've done aggregate policing before, although not hierarchical, but I'll
have a go at suggesting what might work.

The aggregate policing can be achieved with a firewall filter and policer
combo, and under the policer you need 'physical-interface-policer'. This
needs to be applied to all IFLs.

Then I think you can police each IFL with the 'policer' command. The output
policers should be evaluated after the firewall filters so in theory it
should work.

I haven't tested it but would be interested to know if you get it to work.

Config would look something like:

firewall {
    family inet {
        filter AGG_POLICE_500M {
            physical-interface-filter;
            term POLICE {
                then {
                    policer POLICER_AGG_500M;
                }
            }
        }
    }
    policer POLICER_AGG_500M {
        physical-interface-policer;
        if-exceeding {
            bandwidth-limit 500m;
            burst-size-limit 312500;
        }
        then discard;
    }
    policer POLICER_100M {
        if-exceeding {
            bandwidth-limit 100m;
            burst-size-limit 62500;
        }
        then discard;
    }
}
interfaces {
    ge-0/0/0 {
        flexible-vlan-tagging;
        encapsulation flexible-ethernet-services;
        unit 100 {
            vlan-id 100;
            family inet {
                filter {
                    output AGG_POLICE_500M;
                }
                policer {
                    output POLICER_100M;
                }
            }
        }
        unit 200 {
            vlan-id 200;
            family inet {
                filter {
                    output AGG_POLICE_500M;
                }
                policer {
                    output POLICER_100M;
                }
            }
        }
    }
}


Cheers,

Dan