Re: [j-nsp] router reflector clients and non-clients

2018-05-30 Thread Christopher E. Brown


Using different cluster-ids you end up with multiple copies of all the
prefixes, but done correctly it works just fine and provides better
redundancy at the cost of a higher path count.

If you are running a single cluster-id and already pushing router memory
limits, it would be a bad idea.

If you have the memory, dual RRs with different cluster-ids and
multipath are not a bad thing.

For what it's worth, I have run a fairly large network exactly that way
since 2011 without issue.
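
For illustration only (addresses and cluster-ids below are made up), a
minimal sketch of that dual-RR layout, each RR with its own cluster-id
and an ordinary iBGP session between the two:

  # RR1 (lo0 10.0.0.1)
  set protocols bgp group RR-CLIENTS type internal local-address 10.0.0.1
  set protocols bgp group RR-CLIENTS cluster 10.0.0.1
  set protocols bgp group RR-CLIENTS neighbor 10.0.1.1
  set protocols bgp group RR-PEERS type internal local-address 10.0.0.1
  set protocols bgp group RR-PEERS neighbor 10.0.0.2

  # RR2 (lo0 10.0.0.2) mirrors this with "cluster 10.0.0.2" and the same
  # client list; each client peers with both 10.0.0.1 and 10.0.0.2.

Each client then carries a path learned via each RR, which is where the
extra memory goes.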

On 5/30/18 00:02, Victor Sudakov wrote:
> Alexander Marhold wrote:
>>> Instead you should not even connect RR1 and RR2 together,
>>> and treat the RR infrastructure built from RR1s in their respective
>>> clusters as a separate infrastructure to the RR2s.
>>> This is the proper way.
>>
>> NO, NO, NO.
>> That was the proper way in 1995, but it is no longer current...
>> (Unfortunately most BGP books were written at that time and are still
>> being sold...)
>>
>> Never do this, as it can lead to missing routing updates: if client A is
>> connected to RR-1 only and client B to RR-2 only (because of link
>> outages), then A does not get the routes from B and vice versa.
>>
>> Therefore give each RR a unique cluster-id (recommended: identical to its
>> router-id), and then either build a normal iBGP session between the two
>> RRs or make each one a client of the other.
> 
> But in this scenario, a client will send an update both to RR1 and
> RR2, and RR1 will reflect this update to RR2, and RR2 will reflect it
> to RR1, and voila! We have a routing loop.
> 
>>
>> There are nice explanations on the Internet backing me up (Ivan Pepelnjak,
>>  http://packetpushers.net/bgp-rr-design-part-1/ 
>> http://orhanergun.net/2015/02/bgp-route-reflector-clusters/
> 
> Thanks, will read those.
> 




Re: [j-nsp] MX960 with 3 RE's?

2016-01-15 Thread Christopher E. Brown

The MPC2-Q is an advanced per-unit-queueing card and has QX; it also
runs against the lower/original fabric rate.

The 16XGE is a port-mode card with no QX, and it supports the first
of the fabric speed increases.

The published capacity of the MPC2-Q is 30G per MIC, with a
Juniper-supplied actual worst-case figure of 31.7G, climbing to about
39G with larger frames.

This matches my own test results, and it does not change with the SCB model.


On 1/15/16 03:47, Adam Vitkovsky wrote:
>> From: Saku Ytti [mailto:s...@ytti.fi]
>> Sent: Friday, January 15, 2016 10:18 AM
>> On 15 January 2016 at 03:13, Christopher E. Brown
>> <chris.br...@acsalaska.net> wrote:
>>> When the same folks were asked about the 16XGE card and the 120G (and
>>> later 160G) performance it was indicated that there was an additional
>>> layer of logic/asics used to tie all 4 trios in the 16XGE to the bus
>>> and that these ASICs offloaded some of the bus related overhead
>>> handling from the TRIOs, freeing up enough capacity to allow each TRIO in
>> the 16XGE to provide a full 40G duplex after jcell/etc overhead.
>>
>>
>> Sorry Christopher for being suspicious, but I think you must have made some
>> mistake in your testing.
>>
>> Only difference that I can think of, on top of the multicast replication, is 
>> that
>> 16XGE does not have TCAM. But that does not matter, as the TCAM isn't
>> used for anything in MPC1/MPC2, it's just sitting there.
>> MPC1/MPC2 can be bought without QX. You specifically mention '3D-Q'.
>> If you were testing with QX enabled, then it's wholly different thing.
>> QX was never dimensioned to push all traffic in every port via QX, it's very
>> very much underdimensioned for this. If MQ can do maybe ~70Gbps
>> memory BW, QX can't do anywhere near 40Gbps. So if you enable QX or
>> ingress+egress, you're gonna have very very limited performance.
>>
> Saku is right, if you do the math then 40.960Gbps is a theoretical maximum 
> for QX as a whole.
> 
> adam
> 
> 
> 
> Adam Vitkovsky
> IP Engineer
> 
> T:  0333 006 5936
> E:  adam.vitkov...@gamma.co.uk
> W:  www.gamma.co.uk
> 



Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Christopher E. Brown
On 1/14/2016 1:48 PM, Jeff wrote:
> Am 14.01.2016 um 23:19 schrieb Christopher E. Brown:
>>
>>
>> Agree, mixing DPC and MPC is a terrible idea.  Don't like DPC to begin
>> with, but nobody in their right mind mixes DPCs and MPCs.
>>
> 
> Why is that? The mentioned 16x 10G card actually sounds interesting but is 
> still quite
> expensive compared to the older DPCEs with 4x 10GE so we had them in mind for 
> our growth
> and are planning to use them together with the existing DPCEs. Why wouldn't 
> you do it?
> 
> 
> Thanks,
> Jeff
> 

DPCs and MPCs have different engines with different features available.  Running even one
DPC prevents running in enhanced mode, which limits features on the MPCs.  Historically,
mixing DPCs and MPCs was also asking to get bitten by the "bug of the week".


I worked with and tested all MX hardware, but downchecked the MX for Carrier Ethernet use
until the Trio-based MPC2-Q cards were available, due to limited traffic control/queueing
features.


I deployed with and still use the 16XGE cards.  Initial build (early 2012) was 
with
RE2000/SCB and we used 3 out of 4 10G in each bank.  The NxXGE dist boxes were 
later
re-fit with SCBE cards and opened up for 16 XGE active per card.



If you are comfy with DPCs for, say, normal Inet backbone use (not, say, Carrier Ethernet
service termination and similar, where the MPC2/MPC-3 features can be critical) and just
want capacity, I guess you would be fine, barring any remaining "mix bugs".  The 16XGE is
just a basic port-mode card without a whole bunch of extra features.

If on the other hand you are doing edge feeding with hundreds to thousands of units per
card, lots of filters, etc., mixing MPC and DPC is likely still a very bad idea.



Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Christopher E. Brown

It is 120Gbit aggregate in an SCB system, as the limit is 120G/slot.

This is in the form of 30Gbit per TRIO and 1 TRIO per 4 ports.

So, use 3 of 4 in each bank if you want 100% line rate.

SCBE increases this to 160Gbit or better (depending on age of chassis)
allowing for line rate on all 16.


Agree, mixing DPC and MPC is a terrible idea.  Don't like DPC to begin
with, but nobody in their right mind mixes DPCs and MPCs.

On 1/14/16 13:09, Saku Ytti wrote:
> On 14 January 2016 at 23:11, Mark Tinka <mark.ti...@seacom.mu> wrote:
> 
> Hey,
> 
>>> A surplus dealer recommended the MPC-3D-16XGE-SFPP-R-B, which is less
>>> than $1k/port used.  But it only supports full bandwidth on 12 ports,
>>> would be oversubscribed 4:3 if all 16 are in use.  They are Trio chipset.
> 
>> I wouldn't be too concerned about the throughput on this line card. If
>> you hit such an issue, you can easily justify the budget for better line
>> cards.
> 
> Pretty sure it's as wire-rate as MPC1, MPC2 on SCBE system.
> 
> Particularly bad combo would be running with DPCE. Basically any
> MPC+DPCE combo is bad, but at least on MPC1/MPC2 systems you can set
> the fabric to redundancy mode. But with 16XGE+SCB you probably don't
> want to.
> 
> There is some sort of policing/limit on how many fabric grants MPC
> gives up, like if your trio needs 40Gbps of capacity, and is operating
> in normal mode (not redundancy) then it'll give 13.33Gbps of grants to
> each SCB.
> However as DPCE won't use the third one, DPCE could only send at
> 26.66Gbps towards that 40Gbps trio. While MPC would happily send
> 40Gbps to MPC.
> 




Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Christopher E. Brown

With 2 active SCBs (2+0 non-redundant or 2+1 redundant in a 960) you are 
capable of 120G
per slot.

The worst case throughput in this case would depend on the cards.

For a single-Trio MPC1 it would be 30Gbit per card (actual 31.7 to ~39 depending
on packet mix).

For a dual-Trio MPC2 it would be the same per half card, or 60G total.

For the quad-Trio 16XGE it would be 30G per bank of 4 until you use SCBE or better,
then 40G per bank of 4.


I do not recall the specs for those DPCs, as we downchecked all DPCs for other 
reasons and
held off deployment until the release of the MPC2-3D-Q cards.


On 1/14/2016 1:37 PM, Colton Conor wrote:
> So assuming I got with something like this
> setup: 
> http://www.ebay.com/itm/Juniper-MX960BASE-AC-10x-DPC-R-4XGE-XFP-MS-DPC-1yrWrnty-Free-Ship-40x-10G-MX960-/351615023814?hash=item51dde37ac6:g:WUcAAOSwLVZViHc~
> 
> That has RE-2000's, Regular SCBs, and all DPC linecards. Would these all be 
> able to run at
> line rate? 
> 
> On Thu, Jan 14, 2016 at 4:19 PM, Christopher E. Brown
> <chris.br...@acsalaska.net> wrote:
> 
> 
> It is 120Gbit agg in a SCB system as the limit is 120G/slot.
> 
> This is in the form of 30Gbit per TRIO and 1 TRIO per 4 ports.
> 
> So, use 3 of 4 in each bank if you want 100% line rate.
> 
> SCBE increases this to 160Gbit or better (depending on age of chassis)
> allowing for line rate on all 16.
> 
> 
> Agree, mixing DPC and MPC is a terrible idea.  Don't like DPC to begin
> with, but nobody in their right mind mixes DPCs and MPCs.
> 
> On 1/14/16 13:09, Saku Ytti wrote:
> > On 14 January 2016 at 23:11, Mark Tinka <mark.ti...@seacom.mu> wrote:
> >
> > Hey,
> >
> >>> A surplus dealer recommended the MPC-3D-16XGE-SFPP-R-B, which is less
> >>> than $1k/port used.  But it only supports full bandwidth on 12 ports,
> >>> would be oversubscribed 4:3 if all 16 are in use.  They are Trio 
> chipset.
> >
> >> I wouldn't be too concerned about the throughput on this line card. If
> >> you hit such an issue, you can easily justify the budget for better 
> line
> >> cards.
> >
> > Pretty sure it's as wire-rate as MPC1, MPC2 on SCBE system.
> >
> > Particularly bad combo would be running with DPCE. Basically any
> > MPC+DPCE combo is bad, but at least on MPC1/MPC2 systems you can set
> > the fabric to redundancy mode. But with 16XGE+SCB you probably don't
> > want to.
> >
> > There is some sort of policing/limit on how many fabric grants MPC
> > gives up, like if your trio needs 40Gbps of capacity, and is operating
> > in normal mode (not redundancy) then it'll give 13.33Gbps of grants to
> > each SCB.
> > However as DPCE won't use the third one, DPCE could only send at
> > 26.66Gbps towards that 40Gbps trio. While MPC would happily send
> > 40Gbps to MPC.
> >
> 
> 
> 
> 




Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Christopher E. Brown

In the datasheets for the 16XGE, since release, Juniper specifically called out 120G due
to the bus limit with SCB.

Not in the sheet, but the actual guarantee is 30G per bank of 4.

We used the first 4 of each bank in heavy core use and the behavior is as specified.

With the change to SCBE this limit is removed and _all_ 16 ports are line-rate capable at
the same time.

The 30Gbit nominal (actual 31.7 or greater) limit per Trio applies to the MPC1 and 2 cards,
but the quad-Trio interconnect in the 16XGE is wired up differently, with additional
helpers, and can do the full 40G per bank.

There is no increase in usable capacity with the MPC1-3D and MPC2-3D cards with 
the SCBE
though, still 30G.


I did not have gear for a 16XGE full test, but after SCBE upgrade I did 
specifically test
at full duplex line rate on all 4 ports in a bank on one card to all 4 ports in 
a bank on
a second card in the same chassis.

On 1/14/2016 2:13 PM, Adam Vitkovsky wrote:
>> Christopher E. Brown
>> Sent: Thursday, January 14, 2016 10:20 PM
>> To: juniper-nsp@puck.nether.net
>> Subject: Re: [j-nsp] MX960 with 3 RE's?
>>
>>
>> It is 120Gbit agg in a SCB system as the limit is 120G/slot.
>>
>> This is in the form of 30Gbit per TRIO and 1 TRIO per 4 ports.
>>
>> So, use 3 of 4 in each bank if you want 100% line rate.
>>
>> SCBE increases this to 160Gbit or better (depending on age of chassis)
>> allowing for line rate on all 16.
>>
> So the 16XGE cards don’t have the limitation of max 38Gbps between Trio and 
> Fabric please?
> 
> 
> adam
> 
> 
> Adam Vitkovsky
> IP Engineer
> 
> T:  0333 006 5936
> E:  adam.vitkov...@gamma.co.uk
> W:  www.gamma.co.uk
> 
> 
> 
> 



Re: [j-nsp] MX960 with 3 RE's?

2016-01-14 Thread Christopher E. Brown

We actually spent a couple of years spooling up for a greenfield build of a network
required to provide very tight SLAs on a per-service basis.

We were working with the MX BU directly for almost 2 years, constantly exchanging test
results and loaner/testing cards.

Per specs provided by the designers of the MPC2 and verified by our testing, the worst-case
throughput per Trio for an MPC2-3D or 3D-Q is 31.7G; that is full duplex with every
packet going to or from the bus and every packet being minimum length. That is actual
packet, after all internal overhead.

Larger packets bring this closer to, but not all the way to, 40G, and traffic hairpinning
on the same Trio can go all the way to 40G.

We did not investigate any impact of MCAST replication on this as MCAST 
replication is not
part of the use case for us.


When the same folks were asked about the 16XGE card and the 120G (and later 
160G)
performance it was indicated that there was an additional layer of logic/asics 
used to tie
all 4 trios in the 16XGE to the bus and that these ASICs offloaded some of the 
bus related
overhead handling from the TRIOs, freeing up enough capacity to allow each TRIO 
in the
16XGE to provide a full 40G duplex after jcell/etc overhead.


On 1/14/2016 3:27 PM, Saku Ytti wrote:
> On 15 January 2016 at 01:39, Christopher E. Brown
> <chris.br...@acsalaska.net> wrote:
>> The 30Gbit nominal (actual 31.7 or greater) limit per trio applies to the 
>> MPC1 and 2 cards
>> but the quad trio interconnect in the 16XGE is wired up diff with additional 
>> helpers and
>> can do the full 40G per bank.
> 
> 
> Pretty sure this is not true. There are three factors in play
> 
> a) memory bandwidth of trio (mq)
> b) lookup performance of trio (lu)
> c) fabric capacity
> 
> With SCBE 16XGE isn't limited by fabric, but it's still limited by
> memory bandwidth and lookup performance.
> 
> Memory bandwidth is tricksy, due to how packets are split into cells,
> if you have unlucky packet sizes, you're wasting memory bandwidth
> sending padding, and can end up having maybe even that 30Gbps
> performance, perhaps even less.
> If you have lucky packet sizes, and you can do 40Gbps. For me, I work
> with half-duplex performance of 70Gbps, and that's been safe bet for
> me, in real environments.
> 
> Then there is lookup performance, which is like 50-55Mpps, out of
> 60Mpps of maximum possible.
> 
> 
> 
> Why DPCE=>MPC is particularly bad, is because even though MQ could
> accept 20Gbps+20Gbps from the fabrics DPCE is sending, it's not, it's
> only accepting 13+13, but DPCE isn't using the last fabric to send the
> remaining 13Gbps.
> So if your traffic flow from single DPCE card (say 4x10GE LACP) is out
> to single trio, you'll be limited to just 26Gbps out of 40Gbps.
> 
> If your chassis is SCBE or just MPC1/MPC2 you can 'set chassis fabric
> chassis redundancy-mode redundant', which causes MPC to use only two
> of the fabrics, in which case it'll accept 20+20 from DPCE, allowing
> you to get that full 40Gbps from DPCE.
> But this also means, if you also have 16XGE and SCB, you can pretty
> much use just half of the ports of 16XGE.
> 
> If this is confusing, just don't run mixed box, or buy SCBE and run
> redundant-mode.
> 
> 
> 




Re: [j-nsp] transmit-rate percent shaping-rate working together

2015-06-19 Thread Christopher E. Brown
On 6/19/15 07:00, Adam Vitkovsky wrote:
 Hi Folks,
 
 Is there a way to make the set class-of-service schedulers Class_1 
 transmit-rate percent 20
 to actually use class-of-service interfaces ge-1/3/0 unit 0 shaping-rate 4m
 when allocating BW for the class please?
 
 Thank you very much
 
 adam

A percentage-based transmit rate on a scheduler, when applied via a
scheduler-map to a logical unit with a shaping rate, should be the set
percentage of the shaping rate, not the native interface rate (in this
case, the transmit rate would be 800,000bps).


I am assuming (based on the configuration example) that you are in
per-unit mode with a per-unit-capable platform/linecard, even though the
example is unit 0.

It should work the same with port-mode-only platforms, but the shaper
would be on the port, not the port/unit.






Re: [j-nsp] LACP/LAG

2013-10-17 Thread Christopher E. Brown

Unless you like losing traffic or looping traffic stay away from unconditional 
channeling.

There are a number of situations ranging from a failing linecard/port to a 
simple
misconfig that will leave the links up but not properly functional.


Possible issues...

All traffic going down link X is lost...

Far end thinks link1 and link2 are normal ports and creates a loop condition...


etc..


Use active signaling wherever possible...  Fast timing where you have good nodes doing
hardware offload, slow timing with less ideal boxen.


BTW: with LACP, active vs. passive has to do with whether we initiate LACP traffic;
LACP vs. fixed ("on") channeling is something else.
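
A minimal sketch of the MX side with active signaling and fast timers (ae0 and
the member ports are illustrative):

  set chassis aggregated-devices ethernet device-count 1
  set interfaces xe-0/0/0 gigether-options 802.3ad ae0
  set interfaces xe-0/0/1 gigether-options 802.3ad ae0
  set interfaces ae0 aggregated-ether-options lacp active
  set interfaces ae0 aggregated-ether-options lacp periodic fast

The rough Cisco-side match would be channel-group mode active rather than mode on.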

On 10/17/2013 1:00 PM, Keith wrote:
 Hi.
 
 Any reason not to run LACP on a LAG link?
 
 Setting up a new LAG with some gear on our MX and have setup the AE
 interface and turned it up, but have not actually cut traffic over to it yet.
 
 They were saying run in passive or no LACP, with it just
 
 On cisco one does: channel-group x mode on/active/passive and I
 did have trouble getting both sides to come up properly on the cisco, one
 side would always come up suspended until I set the ports to just on.
 
 I have done the same on the MX side, except I did not use 
 aggregated-ether-options lacp
 and just left that statement out.
 
 Both sides came up on the MX and it looks ok. Am I going to get bitten in the 
 ass
 at some point for not running LACP?
 
 Docs I see always configure LACP for LAGs, and mention not running it but 
 never really
 why you would.
 
 Thanks,
 Keith
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] Output queue drops and temporal buffers

2013-10-04 Thread Christopher E. Brown
On 10/1/2013 11:12 AM, John Neiberger wrote:
 
 My question regarding the config is this: if we are already setting the
 transmit rate percentage, why would we also configure a temporal
 allocation? Isn't a percentage of the available bandwidth sufficient? What
 does adding a temporal allocation buy us? Is the transmit rate percent just
 setting how often that queue is serviced, while the buffer-size configures
 the queue depth?
 
 Thanks,
 John


Transmit rate is how fast you empty the buffer (if not constrained and excess is
available, it may empty faster).

Buffer size is just that.


If you do not set it, IIRC the available buffer is divided among the queues by
transmit rate.

This is fine, but the available buffer may be very large compared to how much 
delay you
want to allow in a class.


EF traffic should not be buffered more than a very small amount, if you cannot 
send in 2ms
or less you really need to drop it.

Even for the BE queue, how much latency climb do you want to allow before you 
start
dropping?  200ms is insane, 10ms is too small, something in the 40 - 80 ms 
range would be
better (or maybe much larger on a small long haul pipe).


Now, you can get all this by setting explicit buffer sizes, BUT those sizes are tailored
to a specific transmit rate.

What if you want to run the same core maps on OC3, OC12, 1GE, 10GE?  You need to set
transmit rate percent and buffer temporal...  That scales to the actual rate to provide
just enough but not too much buffering.
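
Something along these lines (class names and values are just examples; temporal
is in microseconds):

  set class-of-service schedulers EF transmit-rate percent 20
  set class-of-service schedulers EF buffer-size temporal 2000
  set class-of-service schedulers BE transmit-rate percent 50
  set class-of-service schedulers BE buffer-size temporal 50000

The same scheduler-map can then ride on an OC3 or a 10GE port; the temporal
values cap buffering at roughly 2ms for EF and 50ms for BE regardless of the
port speed, while the percentages scale the drain rate.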




Re: [j-nsp] Problem to test VPLS between two Site

2013-07-17 Thread Christopher E. Brown
  : 0
  Flooded bytes: 0
  Unicast packets  : 0
  Unicast bytes: 0
  Current MAC count:  0
 
 
 it would be nice please suggest based on this config how to test  this layer 
 2 vpn using PC?
 
 
 
 Regards
 Jahangir Hossain
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp
 




Re: [j-nsp] RSVP Sessions

2013-04-18 Thread Christopher E. Brown

We are running a stats interval of 5 minutes, an adjust threshold of 5%
with a limit of 3, and an optimize timer of 3600.


In other words: look for a better path once per hour; if three 5-minute
intervals are > 5% over the auto-bw LSP reservation, attempt to adjust
the LSP reservation upwards and re-path if needed.

Been running that way on M and now MX since about 2006
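
For reference, a sketch of that kind of setup (the LSP name and exact values
are illustrative, and mapping 'limit 3' to adjust-threshold-overflow-limit is
my reading of it):

  set protocols mpls statistics file auto-bw.stats size 1m
  set protocols mpls statistics interval 300
  set protocols mpls statistics auto-bandwidth
  set protocols mpls optimize-timer 3600
  set protocols mpls label-switched-path LSP-TO-PE2 auto-bandwidth adjust-interval 3600
  set protocols mpls label-switched-path LSP-TO-PE2 auto-bandwidth adjust-threshold 5
  set protocols mpls label-switched-path LSP-TO-PE2 auto-bandwidth adjust-threshold-overflow-limit 3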

On 4/18/13 2:08 AM, Mohammad Salbad wrote:
 Dear Mohammad
 
  
 
 Please refer to the below link
 
 http://www.juniper.net/techpubs/en_US/junos12.3/topics/reference/configurati
 on-statement/optimize-timer-edit-protocols-mpls.html
 
  
 
 Also please consider the following:
 
 The optimize timer starts at a certain time and then runs repeatedly at the
 period you define. For example, if you set the optimize timer to 3600 seconds
 and it runs at 2:15 pm, the next time it will check the best paths will be at
 3:15 pm. It does not run when there is a change in LSP paths; when it runs, it
 checks whether there are LSP changes or suboptimal paths and returns the
 network to best-path selection, and it relies on your IGP protocol (OSPF/IS-IS).
 
 Another option would be smart optimize timer, which respond immediately to
 network changes as below link:
 
 https://www.juniper.net/techpubs/en_US/junos10.0/information-products/topic-
 collections/config-guide-mpls-applications/mpls-configuring-the-smart-optimi
 ze-timer.html
 
  
 
 You also need to be cautious when defining the optimize timer
 interval, since it consumes resources; I would suggest setting it to a
 value above 1 hour.
 
  
 
 M. Salbad 
 
  
 
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp
 




Re: [j-nsp] Different media in an AE cause framing issues?

2013-04-03 Thread Christopher E. Brown

Aggregated Ethernet happens at the packet layer; each member is still its own
link at L1 and proper L2...


Suspect the modules, the cable batch, the specific install of that set of
cables, or something else.  The only thing the AE has to do with it is
queueing the traffic...

On 4/2/13 9:51 AM, Morgan McLean wrote:
 Phil,
 
 I can't imagine that BOTH connections I added to the AE are bad, whether it
 be SFP related or Fiber related. And I'm using the same fiber and SFP's out
 to the cabinets (with no issues as of yet), which leaves me with the DAC +
 SFP on an AE is a bad idea.
 
 Ben,
 
 Ya I'm thinking thats my next step, only problem is this is in production,
 and I would really hate to bring up an AE that drops packets out the ass. I
 think people would notice that one...Might be able to test it in a lab if I
 have the required stuff here.
 
 Morgan
 
 
 On Tue, Apr 2, 2013 at 1:29 AM, Phil Mayers p.may...@imperial.ac.uk wrote:
 
 On 04/01/2013 08:52 PM, Morgan McLean wrote:

  So at this point I'm wondering if mixing the fiber and DAC connections is
 a
 nono? Due to timing issues? Not sure how that would screw up the frame,
 but
 maybe somebody has more insight.


 That seems like a really complicated explanation; why would you assume
 that, rather than a fibre problem e.g. bad splice, dirty connector, etc? Or
 even bad SFP+?

 I'm unfamiliar with the EX platforms, but it would seem really odd if they
 had this kind of tight inter-port timing constraints just because they're
 in an aggregate; what about diverse fibre paths with radically different
 lengths, which are a common enough config?

 
 
 




Re: [j-nsp] MX80 port numbering

2013-03-15 Thread Christopher E. Brown
On 3/15/13 2:41 AM, Sebastian Wiesinger wrote:
 Has anyone here an easily understandable graphic for port numbering on
 MX80 mic slot(s)? I can't get it right half of the time and support
 staff on-site never knows which port is which. Even the labels on the
 box are not really helpful.
 
 Regards
 
 Sebastian

Simple trick if you are used to MPC based 960s...  Rotate the chassis 90
degrees clockwise.


In normal orientation

Built in 10G are slot 0, left to right xe-0/0/0-3

Left MIC if loaded with 20x1G

BOTTOM row is left to right ge-1/0/0-9
TOP row is left to right ge-1/1/0-9

Left MIC if loaded with 2x10G
left, xe-1/0/0
right, xe-1/1/0


Right MIC if loaded with 20x1G

BOTTOM row is left to right ge-1/2/0-9
TOP row is left to right ge-1/3/0-9

Right MIC if loaded with 2x10G
left, xe-1/2/0
right, xe-1/3/0


Like I said, just like a MPC2 loaded in slot 1 of a 960, just tipped on
its side.




[j-nsp] If you are going to set egress-shaping-overhead on a TRIO based MX, READ THIS

2013-03-06 Thread Christopher E. Brown

So, TRIO cards have the ability to set an overhead offset for their shapers.

By default the offset is +24 bytes to account for preamble, frame start
and FCS.

This is a good thing, as the shapers then match actual L1 rates.

You can change this if what you actually need to do is set the target
rate for a service operating at a different layer.  For example, if
using the shapers for rate control of MEF-compliant services over a
STAGged last mile, you might set it to -24, offsetting preamble, frame
start and the STAG, and only counting the full L2 frame including FCS.


Important note...

The setting is per PIC, but changing the setting for any PIC on a TRIO
resets the whole FPC.

More important note...

While you can make the change to any or all of the PICs on a FPC, if you
make the change on _all FPCs_ in the box at the same time the system
will never come back without resetting the FPCs manually a second time.

Found this out the hard way, and retested on a second system, same results.

Doing the change one FPC at a time (waiting for the FPC to come back and
start forwarding before doing the next) seems to work fine, on my third
system using that method and have not had to fall back to out of band modem.
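
For reference, the knob in question looks like this (FPC/PIC numbers are just
examples); per the above, change one FPC at a time and let it come back before
touching the next:

  set chassis fpc 1 pic 0 traffic-manager egress-shaping-overhead -24
  set chassis fpc 1 pic 1 traffic-manager egress-shaping-overhead -24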



Re: [j-nsp] M10i FPC PIC Throughput Questions

2013-02-26 Thread Christopher E. Brown

If you oversubscribe the bus, you oversubscribe the bus.  Traffic load
vs. average packet size vs. level of oversubscription vs. burstiness will
determine the loss potential.

On 2/26/13 11:59 AM, Matt Bentley wrote:
 Thanks very much.  So confirm, there is nothing that says one PIC will
 get a certain amount at minimum right?  
 
 What determines what is dropped when there is contention on the bus?
  Are there any commands I could use to see whether a bus is/was
 congested and how much of what was dropped?
 
 On Sat, Feb 23, 2013 at 9:37 PM, Christopher E. Brown
 chris.br...@acsalaska.net wrote:
 
 
 Bus is _shared_, with CFEB you have guaranteed 3.2Gbit shared by up to 4
 PICs, with E-CFEB is non issue single PIC limit is 1G and E-CFEB will do
 full 1G per no matter what.
 
 If you try to handle more than 3.2Gbit on a CFEB bus (X-0/X/X or
 X-1/X/X) you may see bus contention depending on packet size.
 
 Load 4xGE and maybe.  Load 3xGE + 4xDS3 is pushing limit but OK.
 
 With E-CFEB, non issue.
 
 With CFEB summ the bandwith make make sure is 3200Mbit or less, and 3200
 is shared by all 4 PICs.
 
 
 On 2/23/13 6:51 PM, Matt Bentley wrote:
  Thanks!  So it would be correct to say you should NEVER see
  oversubscription on a channelized DS3 card right?  Obviously, you can
  overdrive a single T1, but you'd never see drops due to the PIC
 itself?
   I guess what I'm asking is whether or not the bandwidth
 availalble on a
  FPC is allocated equally per PIC, or if everyone sort of shares it.
 
  On Sat, Feb 23, 2013 at 8:13 PM, Christopher E. Brown
  chris.br...@acsalaska.net wrote:
 
 
  With the std cfeb after internal overhead per bus capacity is
 3.2Gbit of
  traffic, this is worst case minimum small packets, etc.
 
  Raw bus capacity is IIRC ~ 4Gbit, difference is overhead.
 
  Unless you are doing all small packet, actual limit is higher
 than 3.2.
 
  Enhanced CFEB bumps the raw bus capacity to something around
 5Gbit, and
  the after all overheads forwarding capacity to 4Gbit (based on
 the 1G
  per PIC limit).
 
  Summ...
 
  CFEB
  Up to 1Gbit per PIC, 3.2Gbit per bus _worst case small
 packet_
 
  E-CFEB
  Up to 1Gbit per PIC
 
 
  These figures are
  On 2/23/13 6:01 PM, Matt Bentley wrote:
   OK - so there has been a lot of discussion around this that I've
  seen, but
   I've searched for hours and still can't find concrete answers.
   Can someone
   help?
  
   1.  Does the 3.2 Gbps throughput limitation include
 overhead?  In
  other
   words, Is the raw throughput 4 Gbps with effective
 throughput of 3.2
   Gbps?  Or is it 3.2 Gbps of raw throughput with effective
  throughput of 2.5
   Gbps?
   2.  Is this throughput per PIC on the FPC?  So let's say I have
  three 4x
   GigE IQ2 PICs and one channelized DS3 IQ PIC.  Does each PIC get
  bandwidth
   allocated equally between them?  So is it 800 Mbps per PIC, and
  the PICs
   can't steal bandwidth from another one?
   3.  Where, and based on what, is traffic dropped with  Juniper
  head of line
   blocking (ie where multiple high speed input interfaces try
 to go
  out the
   same lower speed exit interface)?
  
   Thanks very much!
   ___
   juniper-nsp mailing list juniper-nsp@puck.nether.net
 mailto:juniper-nsp@puck.nether.net
  mailto:juniper-nsp@puck.nether.net
 mailto:juniper-nsp@puck.nether.net
   https://puck.nether.net/mailman/listinfo/juniper-nsp
  
 
 
  --
 
 
  Christopher E. Brown   chris.br...@acsalaska.net
 mailto:chris.br...@acsalaska.net
  mailto:chris.br...@acsalaska.net
 mailto:chris.br...@acsalaska.net   desk (907) 550-8393
 tel:%28907%29%20550-8393
  tel:%28907%29%20550-8393
   cell (907)
  632-8492 tel:%28907%29%20632-8492
  IP Engineer - ACS
 
 
  ___
  juniper-nsp mailing list juniper-nsp@puck.nether.net
 mailto:juniper-nsp@puck.nether.net
  mailto:juniper-nsp@puck.nether.net
 mailto:juniper-nsp@puck.nether.net
  https://puck.nether.net

Re: [j-nsp] M10i FPC PIC Throughput Questions

2013-02-23 Thread Christopher E. Brown

With the std CFEB, after internal overhead, per-bus capacity is 3.2Gbit of
traffic; this is the worst case (minimum-size small packets, etc.).

Raw bus capacity is IIRC ~4Gbit; the difference is overhead.

Unless you are doing all small packets, the actual limit is higher than 3.2.

The enhanced CFEB bumps the raw bus capacity to something around 5Gbit, and
the after-all-overheads forwarding capacity to 4Gbit (based on the 1G
per-PIC limit).

Summary...

CFEB
Up to 1Gbit per PIC, 3.2Gbit per bus _worst case small packet_

E-CFEB
Up to 1Gbit per PIC


These figures are
On 2/23/13 6:01 PM, Matt Bentley wrote:
 OK - so there has been a lot of discussion around this that I've seen, but
 I've searched for hours and still can't find concrete answers.  Can someone
 help?
 
 1.  Does the 3.2 Gbps throughput limitation include overhead?  In other
 words, Is the raw throughput 4 Gbps with effective throughput of 3.2
 Gbps?  Or is it 3.2 Gbps of raw throughput with effective throughput of 2.5
 Gbps?
 2.  Is this throughput per PIC on the FPC?  So let's say I have three 4x
 GigE IQ2 PICs and one channelized DS3 IQ PIC.  Does each PIC get bandwidth
 allocated equally between them?  So is it 800 Mbps per PIC, and the PICs
 can't steal bandwidth from another one?
 3.  Where, and based on what, is traffic dropped with  Juniper head of line
 blocking (ie where multiple high speed input interfaces try to go out the
 same lower speed exit interface)?
 
 Thanks very much!
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp
 




Re: [j-nsp] M10i FPC PIC Throughput Questions

2013-02-23 Thread Christopher E. Brown

The bus is _shared_: with the CFEB you have a guaranteed 3.2Gbit shared by up
to 4 PICs; with the E-CFEB it is a non-issue, since the single-PIC limit is 1G
and the E-CFEB will do the full 1G per PIC no matter what.

If you try to handle more than 3.2Gbit on a CFEB bus (X-0/X/X or
X-1/X/X) you may see bus contention depending on packet size.

Load 4xGE and maybe you will.  Loading 3xGE + 4xDS3 is pushing the limit but OK.

With the E-CFEB, it is a non-issue.

With the CFEB, sum the bandwidth and make sure it is 3200Mbit or less; the 3200
is shared by all 4 PICs.


On 2/23/13 6:51 PM, Matt Bentley wrote:
 Thanks!  So it would be correct to say you should NEVER see
 oversubscription on a channelized DS3 card right?  Obviously, you can
 overdrive a single T1, but you'd never see drops due to the PIC itself?
  I guess what I'm asking is whether or not the bandwidth availalble on a
 FPC is allocated equally per PIC, or if everyone sort of shares it.
 
 On Sat, Feb 23, 2013 at 8:13 PM, Christopher E. Brown
 chris.br...@acsalaska.net wrote:
 
 
 With the std cfeb after internal overhead per bus capacity is 3.2Gbit of
 traffic, this is worst case minimum small packets, etc.
 
 Raw bus capacity is IIRC ~ 4Gbit, difference is overhead.
 
 Unless you are doing all small packet, actual limit is higher than 3.2.
 
 Enhanced CFEB bumps the raw bus capacity to something around 5Gbit, and
 the after all overheads forwarding capacity to 4Gbit (based on the 1G
 per PIC limit).
 
 Summ...
 
 CFEB
 Up to 1Gbit per PIC, 3.2Gbit per bus _worst case small packet_
 
 E-CFEB
 Up to 1Gbit per PIC
 
 
 These figures are
 On 2/23/13 6:01 PM, Matt Bentley wrote:
  OK - so there has been a lot of discussion around this that I've
 seen, but
  I've searched for hours and still can't find concrete answers.
  Can someone
  help?
 
  1.  Does the 3.2 Gbps throughput limitation include overhead?  In
 other
  words, Is the raw throughput 4 Gbps with effective throughput of 3.2
  Gbps?  Or is it 3.2 Gbps of raw throughput with effective
 throughput of 2.5
  Gbps?
  2.  Is this throughput per PIC on the FPC?  So let's say I have
 three 4x
  GigE IQ2 PICs and one channelized DS3 IQ PIC.  Does each PIC get
 bandwidth
  allocated equally between them?  So is it 800 Mbps per PIC, and
 the PICs
  can't steal bandwidth from another one?
  3.  Where, and based on what, is traffic dropped with  Juniper
 head of line
  blocking (ie where multiple high speed input interfaces try to go
 out the
  same lower speed exit interface)?
 
  Thanks very much!
  ___
  juniper-nsp mailing list juniper-nsp@puck.nether.net
 mailto:juniper-nsp@puck.nether.net
  https://puck.nether.net/mailman/listinfo/juniper-nsp
 
 
 




Re: [j-nsp] MPLS and QoS at penultimate hop ?

2013-02-05 Thread Christopher E. Brown

The table-label bit is the one case I am aware of where the forwarding
class is changed mid-path without the system being specifically
set up to do so.


This assumes that you have mpls and ipv4 classifiers on all interfaces.

Incoming traffic is classified on the label; after the label pop, outgoing
traffic is transmitted in the same class.  If you have an IPv4 rewrite in
place it would kick in.
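
As a sketch, having both classifiers on a core-facing unit might look like
this (the interface/unit are illustrative, and the built-in exp-default /
ipprec-default classifier names are placeholders; substitute your own):

  set class-of-service interfaces ae4 unit 9 classifiers exp exp-default
  set class-of-service interfaces ae4 unit 9 classifiers inet-precedence ipprec-default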





On 2/5/13 12:31 AM, Alexandre Snarskii wrote:
 On Mon, Feb 04, 2013 at 09:36:10AM -0900, Christopher E. Brown wrote:

 The packet is classified on input.

 *UNLESS* you use table-label in a l3vpn, then it gets re-classified
 after the label POP.
 
 Intresting. However, my question is not about l3vpn, it is about plain 
 old IPv4 packet with one MPLS header entering PHP router programmed to Pop 
 this header and forward this packet towards next (egress) router..
 
 Example: 
 
 ingress router:
 
 AA.BBB.205.0/25*[BGP/170] 6w1d 22:28:33, MED 100, localpref 200, from 
 AA.BBB.224.5
  to AA.BBB.233.133 via ae2.6, Push 604050
 
 (BA-classification is done on ipprec/dscp of ingress packet)
 
 PHP router: 
 
 604050 *[LDP/9] 4d 00:37:22, metric 1
  to AA.BBB.233.13 via ae4.9, Pop  
 604050(S=0)*[LDP/9] 4d 00:37:22, metric 1
  to AA.BBB.233.13 via ae4.9, Pop  
 
 (BA classification is most likely done on IP precedence after the Pop is
 done, but that's the point where I'm not sure. Classification on EXP may be
 more accurate: as there is no payload-type field in the MPLS header, this
 router just can't know that the packet's content is IPv4).
 
 
 On 2/3/13 6:03 PM, Chris Kawchuk wrote:
 It was my understanding that the label was logically popped on
 Egress (in terms of how one would envision the packet flow); hence
 the outer label EXP bits were evaluated by the BA classifier on
 ingress properly. (Whether it's popped on ingress, yet evaluated
 prior-to-pop is a mechanics thing..)

 But yes, I have no documentation I can point to; off the top of my
 head that the above is indeed true. I would be interested to know for
 future information sake that the JNPR box is indeed doing the right
 thing so to speak? i.e. since the PIC/MPC pops the Layer-2
 information as well, it needs to be able to read the 802.1P bits, if
 there is a 1p-VLAN tag BA applied to the interface. Hopefully we're
 also reading the outer EXP label at the same time; as this would make
 sense.

 i.e. if you're doing VLAN pop/swapping, you'd need to retain any
 BA-p-tag specific classification there as well, prior to any tag
 manipulation.

 400 quatloos says it's done on ingress before the label is popped.

 - Ck.



 Simple question I'm not able to find answer for: what is the order
  of label pop operation and BA classification on penultimate router
 ?

 I have a gut feeling that label is stripped first and then BA 
 classification is done on a naked packet, f.e., ipprec-based in
 case of IP packet, without taking (already stripped) EXP bits into
  account, but I can't find any documentation proving or disproving
 it...


 ___ juniper-nsp mailing
 list juniper-nsp@puck.nether.net 
 https://puck.nether.net/mailman/listinfo/juniper-nsp



 




Re: [j-nsp] MPLS and QoS at penultimate hop ?

2013-02-04 Thread Christopher E. Brown


The packet is classified on input.

*UNLESS* you use table-label in an L3VPN; then it gets re-classified
after the label POP.

On 2/3/13 6:03 PM, Chris Kawchuk wrote:
 It was my understanding that the label was logically popped on
 Egress (in terms of how one would envision the packet flow); hence
 the outer label EXP bits were evaluated by the BA classifier on
 ingress properly. (Whether it's popped on ingress, yet evaluated
 prior-to-pop is a mechanics thing..)
 
 But yes, I have no documentation I can point to; off the top of my
 head that the above is indeed true. I would be interested to know for
 future information sake that the JNPR box is indeed doing the right
 thing so to speak… i.e. since the PIC/MPC pops the Layer-2
 information as well, it needs to be able to read the 802.1P bits, if
 there is a 1p-VLAN tag BA applied to the interface. Hopefully we're
 also reading the outer EXP label at the same time; as this would make
 sense.
 
 i.e. if you're doing VLAN pop/swapping, you'd need to retain any
 BA-p-tag specific classification there as well, prior to any tag
 manipulation.
 
 400 quatloos says it's done on ingress before the label is popped.
 
 - Ck.
 
 
 
 Simple question I'm not able to find answer for: what is the order
  of label pop operation and BA classification on penultimate router
 ?
 
 I have a gut feeling that label is stripped first and then BA 
 classification is done on a naked packet, f.e., ipprec-based in
 case of IP packet, without taking (already stripped) EXP bits into
  account, but I can't find any documentation proving or disproving
 it...
 
 
 ___ juniper-nsp mailing
 list juniper-nsp@puck.nether.net 
 https://puck.nether.net/mailman/listinfo/juniper-nsp
 




Re: [j-nsp] instance-specific filters for VPLS BUM/flood filtering

2012-12-21 Thread Christopher E. Brown


Well, I just re-tested this in

10.4R9
10.4R10
10.4R11
10.4R12
11.4R6

On MX960 RE2000/MPC2 and MX80

In all cases, whether set to network-services ip or network-services enhanced-ip (with a
reboot in between to actually switch), I always see a single filter and policer set shared
across multiple instances.


I just opened a case and cited the closed PR as bogus/unsolved.


On 11/14/2012 11:08 AM, Christopher E. Brown wrote:
 
 Except I am running network-services ip not enhanced-ip, and 10.4R10 now
 R11 (PR lists R9 as fixed) and am seeing shared policers.
 
 
 
 On 11/14/12 8:19 AM, Addy Mathur wrote:
 Folks:

 When Trio MPCs were released, original behavior pertaining to policer
 behavior on VPLS instances was different from that observed on I-CHIP
 DPCs (as has been uncovered in this thread).  This was changed via
 PR/674408, which should now be externally viewable.  It changes the
 default Trio MPC behavior to be more in line with I-CHIP DPC default.

 https://prsearch.juniper.net/InfoCenter/index?page=prcontent&id=PR674408

 Regards,
 Addy.

 On Fri, Nov 9, 2012 at 2:57 PM, Christopher E. Brown
 chris.br...@acsalaska.net wrote:


 Please share case #, I have same complaints in discussion with our SE
 and up that chain.

 Personally I think they need to add instance-specific as a keyword to
 the policer to make them shared or not-shared by choice.  95% of the
 time I need unshared, but can think of a few cases where shared sould be
 useful.


 On 11/8/12 7:06 AM, Saku Ytti wrote:
 
  In my mind, the default is fine.  It is consistent with normal
 behavior
  and there are times when a shared policer would be desired.  The
 lack of
  a instance specific option though, that is stupid beyond belief,
  shocking surprise.
 
  To me the biggest problem is, you cannot know if instance
 policers are
  shared or not, as it is version dependent.
 
  I opened JTAC case (I can unicast case# if you want to pass it to your
  account team).
 
  Query:
  
  Case A)
 
  # show firewall filter PROTECT-FROM_IP_OPTION
 
  term police-ip-options {
 
  from {
 
  ip-options any;
 
  }
 
  then {
 
  policer POLICE-IP_OPTIONS;
 
  count police-ip-options;
 
  }
 
  }
 
  term accept-all {
 
  then {
 
  count accept-all;
 
  accept;
 
  }
 
  }
 
 
 
  # show firewall policer POLICE-IP_OPTIONS
 
  if-exceeding {
 
  bandwidth-limit 3m;
 
  burst-size-limit 320;
 
  }
 
  then discard;
 
 
 
  set routing-instances RED forwarding-options family inet filter
  PROTECT-FROM_IP_OPTION
 
  set routing-instances BLUE forwarding-options family inet filter
  PROTECT-FROM_IP_OPTION
 
 
 
  Will RED and BLUE share 3Mbps, or will each get own 3Mbps?
 
 
 
 
 
  Case B)
 
 
 
  ...amily vpls filter PROTECT-UNKNOWN_UNICAST
 
 
  term unknown_unicast {
 
  from {
 
  traffic-type unknown-unicast;
 
  }
 
  then {
 
  policer POLICE-UNKNOWN_UNICAST;
 
  accept;
 
  }
 
  }
 
  term accep {
 
  then accept;
 
  }
 
 
 
  show configuration firewall policer POLICE-UNKNOWN_UNICAST
 
  if-exceeding {
 
  bandwidth-limit 42m;
 
  burst-size-limit 100k;
 
  }
 
  then discard;
 
 
 
  set routing-instances GREEN forwarding-options family vpls filter
 input
  PROTECT-UNKNOWN_UNICAST
 
  set routing-instances YELLOW forwarding-options family vpls filter
 input
  PROTECT-UNKNOWN_UNICAST
 
 
 
  Will GREEN, YELLOW share 42Mbps or get own 42Mbps policers?
  
 
 
 
  JTAC response
  
  Query: If you configure same FW with policer to multiple
 instances, what is expected result? Should policer be shared or
 should it be dedicated per instances?
  JTAC: It will be dedicated per instance. In your example RED and
 BLUE will consume 3MB independently.
  ---
 
 
 
 
  But as per my own testing, I know IP-OPTIONS policer was shared in
 10.4 (which
  is what I want for IP options). And VPLS policer I want
 not-shared, as in 11.4.
 
 



Re: [j-nsp] switch idea.?

2012-12-06 Thread Christopher E. Brown
On 12/6/12 8:14 AM, Saku Ytti wrote:
 On (2012-12-06 09:00 -0800), Michael Loftis wrote:
 
 The biggest thing I miss over Cisco is VTP.  Managing VLAN's is a huge pain
 without it when you've got dozens of switches that all need the same VLAN
 
 VTP has ups and downs. Many people have broken network or two with VTP in
 their time.
 If your switch supports 4k VLAN ID, then pre-provisioning all VLAN at
 installation time is in my opinion obvious correct solution.
 
 If you support less than 4k but you need to support arbitrary ID value then
 VTP can avoid need for VLAN provisioning system.

Yes, vtp v1 and v2 are just plain evil when it comes to large scale
networks.


What is needed is an improved VTPv3 equiv.  Set group name, set group
password, set client or servers.  Only master can update and master is
only master after election.  And none of that auto-discover domain/pass
trash.

In other words, explicit working control of what is part of the group
and who can actually update.  On those terms it works well, but vtpv3 is
only available on a limited # of Crisco platforms, and I have never seen
a good non-crisco equiv.



Re: [j-nsp] instance-specific filters for VPLS BUM/flood filtering

2012-11-14 Thread Christopher E. Brown

Except I am running network-services ip, not enhanced-ip, on 10.4R10 (now
R11; the PR lists R9 as fixed) and am still seeing shared policers.



On 11/14/12 8:19 AM, Addy Mathur wrote:
 Folks:
 
 When Trio MPCs were released, original behavior pertaining to policer
 behavior on VPLS instances was different from that observed on I-CHIP
 DPCs (as has been uncovered in this thread).  This was changed via
 PR/674408, which should now be externally viewable.  It changes the
 default Trio MPC behavior to be more in line with I-CHIP DPC default.
 
 https://prsearch.juniper.net/InfoCenter/index?page=prcontent&id=PR674408
 
 Regards,
 Addy.
 
 On Fri, Nov 9, 2012 at 2:57 PM, Christopher E. Brown
 chris.br...@acsalaska.net wrote:
 
 
 Please share case #, I have same complaints in discussion with our SE
 and up that chain.
 
 Personally I think they need to add instance-specific as a keyword to
 the policer to make them shared or not-shared by choice.  95% of the
 time I need unshared, but can think of a few cases where shared sould be
 useful.
 
 
 On 11/8/12 7:06 AM, Saku Ytti wrote:
 
  In my mind, the default is fine.  It is consistent with normal
 behavior
  and there are times when a shared policer would be desired.  The
 lack of
  a instance specific option though, that is stupid beyond belief,
  shocking surprise.
 
  To me the biggest problem is, you cannot know if instance
 policers are
  shared or not, as it is version dependent.
 
  I opened JTAC case (I can unicast case# if you want to pass it to your
  account team).
 
  Query:
  
  Case A)
 
  # show firewall filter PROTECT-FROM_IP_OPTION
 
  term police-ip-options {
 
  from {
 
  ip-options any;
 
  }
 
  then {
 
  policer POLICE-IP_OPTIONS;
 
  count police-ip-options;
 
  }
 
  }
 
  term accept-all {
 
  then {
 
  count accept-all;
 
  accept;
 
  }
 
  }
 
 
 
  # show firewall policer POLICE-IP_OPTIONS
 
  if-exceeding {
 
  bandwidth-limit 3m;
 
  burst-size-limit 320;
 
  }
 
  then discard;
 
 
 
  set routing-instances RED forwarding-options family inet filter
  PROTECT-FROM_IP_OPTION
 
  set routing-instances BLUE forwarding-options family inet filter
  PROTECT-FROM_IP_OPTION
 
 
 
  Will RED and BLUE share 3Mbps, or will each get own 3Mbps?
 
 
 
 
 
  Case B)
 
 
 
  ...amily vpls filter PROTECT-UNKNOWN_UNICAST
 
 
  term unknown_unicast {
 
  from {
 
  traffic-type unknown-unicast;
 
  }
 
  then {
 
  policer POLICE-UNKNOWN_UNICAST;
 
  accept;
 
  }
 
  }
 
  term accep {
 
  then accept;
 
  }
 
 
 
  show configuration firewall policer POLICE-UNKNOWN_UNICAST
 
  if-exceeding {
 
  bandwidth-limit 42m;
 
  burst-size-limit 100k;
 
  }
 
  then discard;
 
 
 
  set routing-instances GREEN forwarding-options family vpls filter
 input
  PROTECT-UNKNOWN_UNICAST
 
  set routing-instances YELLOW forwarding-options family vpls filter
 input
  PROTECT-UNKNOWN_UNICAST
 
 
 
  Will GREEN, YELLOW share 42Mbps or get own 42Mbps policers?
  
 
 
 
  JTAC response
  
  Query: If you configure same FW with policer to multiple
 instances, what is expected result? Should policer be shared or
 should it be dedicated per instances?
  JTAC: It will be dedicated per instance. In your example RED and
 BLUE will consume 3MB independently.
  ---
 
 
 
 
  But as per my own testing, I know IP-OPTIONS policer was shared in
 10.4 (which
  is what I want for IP options). And VPLS policer I want
 not-shared, as in 11.4.
 
 
 
 

Re: [j-nsp] instance-specific filters for VPLS BUM/flood filtering

2012-11-09 Thread Christopher E. Brown

Please share case #, I have same complaints in discussion with our SE
and up that chain.

Personally I think they need to add instance-specific as a keyword to
the policer to make them shared or not-shared by choice.  95% of the
time I need unshared, but can think of a few cases where shared would be
useful.


On 11/8/12 7:06 AM, Saku Ytti wrote:
 
 In my mind, the default is fine.  It is consistent with normal behavior
 and there are times when a shared policer would be desired.  The lack of
 an instance-specific option though, that is stupid beyond belief, a
 shocking surprise.

 To me the biggest problem is, you cannot know if instance policers are
 shared or not, as it is version dependent.
 
 I opened JTAC case (I can unicast case# if you want to pass it to your
 account team).
 
 Query:
 
 Case A)
 
 # show firewall filter PROTECT-FROM_IP_OPTION 
 
 term police-ip-options {
 
 from {
 
 ip-options any;
 
 }
 
 then {
 
 policer POLICE-IP_OPTIONS;
 
 count police-ip-options;
 
 }
 
 }
 
 term accept-all {
 
 then {
 
 count accept-all;
 
 accept;
 
 }
 
 }
 
 
 
 # show firewall policer POLICE-IP_OPTIONS 
 
 if-exceeding {
 
 bandwidth-limit 3m;
 
 burst-size-limit 320;
 
 }
 
 then discard;
 
 
 
 set routing-instances RED forwarding-options family inet filter  
 PROTECT-FROM_IP_OPTION
 
 set routing-instances BLUE forwarding-options family inet filter  
 PROTECT-FROM_IP_OPTION
 
 
 
 Will RED and BLUE share 3Mbps, or will each get own 3Mbps?
 
 
 
 
 
 Case B)
 
 
 
 ...amily vpls filter PROTECT-UNKNOWN_UNICAST
 
 
 term unknown_unicast {
 
 from {
 
 traffic-type unknown-unicast;
 
 }
 
 then {
 
 policer POLICE-UNKNOWN_UNICAST;
 
 accept;
 
 }
 
 }
 
 term accep {
 
 then accept;
 
 }
 
 
 
 show configuration firewall policer POLICE-UNKNOWN_UNICAST 
 
 if-exceeding {
 
 bandwidth-limit 42m;
 
 burst-size-limit 100k;
 
 }
 
 then discard;
 
 
 
 set routing-instances GREEN forwarding-options family vpls filter input 
 PROTECT-UNKNOWN_UNICAST
 
 set routing-instances YELLOW forwarding-options family vpls filter input 
 PROTECT-UNKNOWN_UNICAST
 
 
 
 Will GREEN, YELLOW share 42Mbps or get own 42Mbps policers?
 
 
 
 
 JTAC response
 
 Query: If you configure same FW with policer to multiple instances, what is 
 expected result? Should policer be shared or should it be dedicated per 
 instances?
 JTAC: It will be dedicated per instance. In your example RED and BLUE will 
 consume 3MB independently.
 ---
 
 
 
 
 But as per my own testing, I know IP-OPTIONS policer was shared in 10.4 (which
 is what I want for IP options). And VPLS policer I want not-shared, as in 
 11.4.
 
 


-- 

Christopher E. Brown   chris.br...@acsalaska.net   desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] instance-specific filters for VPLS BUM/flood filtering

2012-11-06 Thread Christopher E. Brown

And I have tested and seen exactly the opposite with 10.4R10 in both
MX80 and all trio MX960.


Create a policer and a vpls filter that matches unknown ucast, bcast and
mcast.

Apply to VPLS forwarding table in 2 instances

...

Two filter instances, but one shared policer.


On 11/6/12 12:00 AM, Saku Ytti wrote:
 On (2012-11-05 10:44 +0100), Sebastian Wiesinger wrote:
 
 And there I'm missing a instance-specific knob. The only reference I
 find to instance-specific is this:

 | To enable this higher precedence on BPDU packets, an instance-specific
 | BPDU precedence filter named default_bpdu_filter is automatically
 | attached to the VPLS DMAC table
 
 Your question is valid. For interfaces you need 'interface-specific' so
 that policers in filter are not shared.
 
 However for some reason, in VPLS routing-instance forwarding-options the
 filter policers are instance-specific, so if your policer says 5Mbps, each
 instance where it is applied gets 5Mbps. Tested in MX960 11.4R5. I don't
 quite understand why it is like this, or where it is documented.
 
 I would have expected they are shared. As if you use forwarding-options
 filters to limit IP options, and apply same filter to main instance and
 routing-instances, then the policer is shared, so 5Mbps IP options is
 shared amongst all routing-instances and global instance. Tested in 10.4R2
 or so.
 
 Does not seem consistent, and I am not sure whether it is an intended feature or not, as
 I've not found documentation.
 


-- 

Christopher E. Brown   chris.br...@acsalaska.net   desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] instance-specific filters for VPLS BUM/flood filtering

2012-11-06 Thread Christopher E. Brown
On 11/6/12 3:43 AM, Sebastian Wiesinger wrote:
 * Christopher E. Brown chris.br...@acsalaska.net [2012-11-06 10:41]:

 And I have tested and seen exactly the opposite with 10.4R10 in both
 MX80 and all trio MX960.


 Create a policer and a vpls filter that matches unknown ucast, bcast and
 mcast.

 Apply to VPLS forwarding table in 2 instances

 ...

 Two filter instances, but one shared policer.
 
 Just to be sure, could you try to use the interface-specific keyword
 for your filter?

Yes, that was the first thing I tried; it will not commit.  Spent most of a day carefully going
through every available option, testing, re-testing, doc searching, etc...


 I wonder if someone can clear this up. I think shared filters are more
 intuitive and in line with how normal interface filters work. But
 then I would need that instance-specific knob.
 
 At the moment I only see the solution to have individual flood
 filters for every VPLS instance which makes large-scale deployment
 complicated (I wanted to apply a standard filter in an apply-group).
 
 Regards
 
 Sebastian

Was totally blown away, and spent several hours chewing on our SE about it.

In my mind, the default is fine.  It is consistent with normal behavior
and there are times when a shared policer would be desired.  The lack of
an instance-specific option though, that is stupid beyond belief, a
shocking surprise.

I laid into the SE team on this one just as hard as I did about tri-color
policers.  Juniper went through all the trouble to create a nice
tri-color system but left out the most useful feature already present in
standard policers... logical-bandwidth
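
For anyone who has not used it: a standard two-color policer can scale its
rate to the logical interface instead of hard-coding it, which is the
logical-bandwidth behavior I mean.  A sketch from memory (names and numbers
are made up, and the exact statements should be checked against your release):

set interfaces ge-1/0/0 unit 100 bandwidth 100m
set firewall policer SUB-RATE logical-bandwidth-policer
set firewall policer SUB-RATE if-exceeding bandwidth-percent 10
set firewall policer SUB-RATE if-exceeding burst-size-limit 125k
set firewall policer SUB-RATE then discard
set interfaces ge-1/0/0 unit 100 family inet policer input SUB-RATE

The complaint above is that the tri-color policers have no equivalent knob.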




-- 

Christopher E. Brown   chris.br...@acsalaska.net   desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] JUNIPER POLICER and CoS Shaping Rate

2012-10-08 Thread Christopher E. Brown
On 10/8/12 4:02 AM, Tima Maryin wrote:
 On 04.10.2012 14:30, Duane Grant wrote:
 
 
 network-services all-ethernet;
 
 
 Btw, it's an unsupported thing.
 
 
 There are ways to do it per IFL, but I don't have an example handy.
 
 
 
 Last time I checked, the built-in ports of the MX80 did not support per-unit
 scheduling.

That is correct, xe-0/0/0 through xe-0/0/3 are port mode only.  It is
the MIC interfaces that have per-unit shaping.
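
For reference, a minimal sketch of per-unit shaping on a MIC port (interface,
unit, VLAN, rate, and the SM-STANDARD scheduler-map name are all made up, and
the scheduler-map itself has to be defined separately):

set interfaces ge-1/0/0 vlan-tagging
set interfaces ge-1/0/0 per-unit-scheduler
set interfaces ge-1/0/0 unit 100 vlan-id 100
set interfaces ge-1/0/0 unit 100 family inet address 192.0.2.1/30
set class-of-service interfaces ge-1/0/0 unit 100 shaping-rate 50m
set class-of-service interfaces ge-1/0/0 unit 100 scheduler-map SM-STANDARD

The built-in xe-0/0/0 through xe-0/0/3 ports only take port-level shaping, as
noted above.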



-- 

Christopher E. Brown   chris.br...@acsalaska.net   desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Tricks for killing L2 loops in VPLS and STP BPDU-less situations?

2012-08-20 Thread Christopher E. Brown


One thing I noticed when working with the BUM filter under a VPLS instance
is that there is no way to declare a per-instance policer that I could find.

You can call the same filter/policer in multiple VPLS instances, but
the named policer is a single global instance.  So, if you call the same
filter w/ a 5Mbit policer in 20 instances it is not 20 separate 5Mbit
policers, it is one policer shared across all.


I very much want to add a BUM policer by default to all VPLS instances,
but I really want to avoid creating a separate filter and policer config
for each instance, when 95% of them would be running one of three
standard configs.


And yes, this was tested.  10.4R10, trio based (MX960s w/ MPC2 and MX80);
the policer was always shared.


On 8/17/12 3:20 PM, Chris Kawchuk wrote:
 Hi Clarke,
 
 We pass BPDUs through VPLS on the MXes - but yes, miscreant users /
 switches will always be a problem.
 
 We do the following to every customer-facing VPLS instance, but only #3 would 
 help you here:
 
 1. Mac Limiting per VPLS Interface (100) (i.e per 'site')
 2. Mac Limiting per VPLS (500)
 3. Limit Broadcast/Unknown Unicast/Multicast Traffic (5 Mbit) into the VPLS
 
 You can put on an input firewall filter which calls a 5 Mbit policer at 
 [routing instances vpls-name forwarding-options family vpls ] to start 
 limiting this type of traffic into the 'bridge domain' at any time.
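
 (Roughly, in configuration terms, with CUSTOMER-VPLS and the numbers as
 placeholders, and with the exact hierarchy possibly differing by release:
 items #1 and #2 are VPLS instance statements,

 set routing-instances CUSTOMER-VPLS protocols vpls interface-mac-limit 100
 set routing-instances CUSTOMER-VPLS protocols vpls mac-table-size 500

 and #3 is the forwarding-options family vpls filter plus policer shown
 elsewhere in this thread.)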
 
 - CK.
 
 
 On 18/08/2012, at 1:08 AM, Clarke Morledge chm...@wm.edu wrote:
 
 We have had the unfortunate experience of having users plug in small 
 mini-switches into our network that have the capability of filtering out 
 (by-default) BPDUs while allowing other traffic through.  The nightmare 
 situation is when a user plugs in such a switch accidentally into two of our 
 EX switches.  Traffic will loop through the miscreant switch between the two 
 EXs and without BPDUs it just looks like MAC addresses keep moving between 
 the real source and the two EXs.

 In an MX environment running VPLS, this problem can happen easily as there 
 are no BPDUs even to protect against loops in VPLS, particularly when your 
 VPLS domain ties into a Spanning Tree domain downstream where your potential 
 miscreant switch may appear.

 I am curious to know if anyone has come up with strategies to kill these 
 loops for EXs running Spanning Tree and/or MXs running VPLS. Rate-limiting 
 may help, but it doesn't kill loops completely.  I am looking for ways to 
 detect lots of MAC address moves (without polling for them) and blocking 
 those interfaces involved when those MAC moves exceed a certain threshold 
 via some trigger mechanism.

 Assume Junos 10.4R10 or more recent.

 Clarke Morledge
 College of William and Mary
 Information Technology - Network Engineering
 Jones Hall (Room 18)
 Williamsburg VA 23187
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp
 
 
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp
 


-- 

Christopher E. Brown   chris.br...@acsalaska.net   desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX80 Power

2012-03-29 Thread Christopher E. Brown
On 3/29/12 9:05 AM, Kevin Wormington wrote:
 Curious if anyone has used one AC and one DC power supply in an MX80?  Yes, I 
 know the docs say it's not supported but we all know how that goes.
 
 Thanks,
 
 Kevin

The big issue is not whether the device will support it, but why devices
generally don't, and often actively prevent it.


The issue is potential differences between the two power systems attempting to
equalize through your device, or through a person.  This includes not just
normal operation, but failure and event handling (device failure, lightning,
infra damage).


It is possible to design a combined AC/DC power system for this, but it
is not simple or easy, technically or regulatory-wise.

It is possible to design isolated input power supplies for devices.

It is possible to design the devices to maintain appropriate internal
isolation to take advantage of this.

It is possible to do all of this at the same time, and in theory meet
all of the required safety regs.

Possible != easy or cheap.  Most folks will accept the increase in
equipment cost.

Most folks will simply hook up the gear if it will allow it, without
regard to the special requirements for install or power systems build-out.


That places the gear makers in an interesting position.

They could make the gear to do it, but it would cost more.

It would only be safe and not risk gear damage if it was coupled with
proper install and AC/DC power systems design.


I have seen plenty of devices running one AC and one DC supply, even
though it was always a DO NOT DO IT item for that device.  I have seen
more than a few damaged, and more than a few safety issues.

It is why many gear makers add additional restrictions, like gear
refusing to power up if both AC and DC supplies are inserted.

-- 

Christopher E. Brown   chris.br...@acsalaska.net   desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] JunOS 10.4R8.5 on MX5? Am I forced to run 11.4+?

2012-03-25 Thread Christopher E. Brown


The _only_ issue is that the chassis ident is MX5-T/40-T etc...

The ID is not in the list so JunOS refuses to finish booting.


We had some part numbers swapped, asked for MX80-40 in late December and
received MX40-T, just had 7 of them swapped out.  (We are currently
running 10.4, no way in hell I am jumping to 11.2 or 11.4 at this time).


The interesting thing about this...  All of the support and sales folks
I talked to have been under the impression for the last 6+ months that
the new MXxx-T units were just a grey chassis version of the MX80-T
(modular MX80 w/ timing).  They all thought it was just a part number
change for the re-badged/colored units.

I stopped yelling at them when they had to go to business unit
management to find out that yes, it is 11.2 and later only, and no, there would
not be a 10.4S release that adds the new chassis IDs for those of us
caught in the middle.



On 3/22/12 6:19 AM, OBrien, Will wrote:
 I think it's a matter of the newer switching fabric only being supported in 
 11.
 
 Will O'Brien
 
 On Mar 22, 2012, at 8:12 AM, Per Granath per.gran...@gcc.com.cy wrote:
 
 I suspect the 10.4 would not lock down the XE ports on the chassis, so there 
 is a reason for not allowing it to work...


 It's quite weird, especially since I can upgrade the system to a full  
 MX80
 with licences only, and if I do that I expect that I will be able to run my
 standard release (10.4R8.5) which I run on my other MX80s (which is the full
 MX80 btw).

 So basically, I feel a bit cheated that I had to wait for MX5 to be 
 released
 when I could get a MX80-5G.

 I'm going to try to install 11.2R5.4 (recommended release) on it now, I do 
 not
  trust an R1-release. Ever. ;-) I might also try to create a technical support 
 ticket or
  raise hell with my SR to try to make the 10.4-series support MX5/10/40s
 unless someone else has done it already.


-- 

Christopher E. Brown   chris.br...@acsalaska.net   desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] J series COS limits

2012-02-16 Thread Christopher E. Brown


Is mapping multiple forwarding classes per queue even possible on a J series?


forwarding-class class and restricted-queue don't even seem to be available in the 
CLI, but I
cannot find any docs that cover it one way or the other.
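
For reference, these are the statements in question as they exist on the M/MX
side, where multiple classes can share one queue (class names and queue number
made up, the restricted-queues stanza being the other piece, and whether the
J series CLI accepts any of this is exactly the open question):

set class-of-service forwarding-classes class GOLD queue-num 1
set class-of-service forwarding-classes class SILVER queue-num 1
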
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Juniper firewall policer inner workings

2011-04-04 Thread Christopher E. Brown

iperf counts payload, not total L3.  The policer is counting total L3.


10 Mbit/sec of total L3 is about 9.8 Mbit/sec of payload in this case.
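
Back-of-the-envelope, using the 1470-byte iperf datagrams from the test plus
20 bytes of IPv4 header and 8 bytes of UDP header:

\[ 10\ \text{Mbit/s} \times \frac{1470}{1470 + 20 + 8} \approx 9.81\ \text{Mbit/s of payload} \]

which is right where the reported 9.77 Mbit/s lands once the packet-granular
drops are figured in.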


There is _always_ noise in the system.

Unless you are using a calibrated test device doing the work in hardware
(or well-written software in a real-time scheduling loop), consider all
flow values to be averages over some small window.  Generally there is
some packing up and bursting at some level; even if the flow averages
10Mbit over a 500ms window, it may be running over/under on a 50ms window.

There is also granularity in the policer.  When it drops, it cannot drop
a fractional packet.

9.77 vs. 9.8 is dead on and as expected.


For example:  a single packet that is 10% over is a 100% drop.  Four packets
a second that are 10% over the limit and are dropped account for the .77 vs.
.8 right there.



On 4/4/11 4:48 AM, Martin T wrote:
 Gabriel,
 the question is, what does JUNOS bandwidth-limit 10m count into this
 10m. Only the application payload? L4 header as well? Or even the
 IP header? If I send UDP traffic with 10Mbps and router policer
 bandwidth-limit 10m drops ~2.5% of this traffic, then I don't find
 this normal..
 
 
 regards,
 martin
 
 
 2011/4/4 Gabriel Blanchard g...@teksavvy.ca:
 The policer is dropping packets in order to slow down your connection to 
 10mbps. In my opinion this is working perfectly.



 On 2011-04-04, at 8:30 AM, Martin T m4rtn...@gmail.com wrote:

 With such configuration: 
 http://img135.imageshack.us/img135/3162/iperftest.png

 ..there is still packet loss present:

 [root@ ~]# iperf -c 192.168.2.1 -u -fm -t60 -d -b 10m
 
 Server listening on UDP port 5001
 Receiving 1470 byte datagrams
 UDP buffer size: 0.04 MByte (default)
 
 
 Client connecting to 192.168.2.1, UDP port 5001
 Sending 1470 byte datagrams
 UDP buffer size: 0.01 MByte (default)
 
 [  4] local 192.168.1.1 port 59580 connected with 192.168.2.1 port 5001
 [  3] local 192.168.1.1 port 5001 connected with 192.168.2.1 port 44050
 [ ID] Interval   Transfer Bandwidth
 [  4]  0.0-60.0 sec  71.5 MBytes  10.0 Mbits/sec
 [  4] Sent 51019 datagrams
 [  3]  0.0-60.0 sec  69.9 MBytes  9.77 Mbits/sec   0.022 ms 1180/51021 
 (2.3%)
 [  3]  0.0-60.0 sec  1 datagrams received out-of-order
 [  4] Server Report:
 [  4]  0.0-60.0 sec  69.9 MBytes  9.77 Mbits/sec   0.150 ms 1180/51018 
 (2.3%)
 [  4]  0.0-60.0 sec  1 datagrams received out-of-order
 [root@ ~]#


 What should this filter-specific do? According to documentation,
 Filter-specific policers allow you to configure policers and counters
 for a specific filter name. If the filter-specific statement is not
 configured, then the policer defaults to a term-specific policer, but
 it's rather difficult to understand without examples..

 regards,
 martin


 2011/4/4 Ben Dale bd...@comlinx.com.au:
 Hi Martin,

 Your policer bandwidth (10Mbps) is being counted on both ingress and 
 egress and will be stacking - try adding:

 set firewall policer bw-10Mbps filter-specific

 and you should see the loss go away.

 Cheers,

 Ben

 On 04/04/2011, at 7:41 PM, Martin T wrote:

 I made a following setup:

 http://img690.imageshack.us/img690/3162/iperftest.png

 In a laptop, an Iperf server is listening like this: iperf -s -u -fm.
 In a workstation, an Iperf client is executed like this: iperf -c
 192.168.2.1 -u -fm -t60 -d -b 10m. This will execute simultaneous
 10Mbps UDP traffic flood between 192.168.1.1 and 192.168.2.1 for 1
 minute. Results are always like this:

 [root@ ~]# iperf -c 192.168.2.1 -u -fm -t60 -d -b 10m
 
 Server listening on UDP port 5001
 Receiving 1470 byte datagrams
 UDP buffer size: 0.04 MByte (default)
 
 
 Client connecting to 192.168.2.1, UDP port 5001
 Sending 1470 byte datagrams
 UDP buffer size: 0.01 MByte (default)
 
 [  4] local 192.168.1.1 port 32284 connected with 192.168.2.1 port 5001
 [  3] local 192.168.1.1 port 5001 connected with 192.168.2.1 port 52428
 [ ID] Interval   Transfer Bandwidth
 [  4]  0.0-60.0 sec  71.5 MBytes  10.0 Mbits/sec
 [  4] Sent 51021 datagrams
 [  4] Server Report:
 [  4]  0.0-59.9 sec  69.8 MBytes  9.77 Mbits/sec   0.112 ms 1259/51020 
 (2.5%)
 [  4]  0.0-59.9 sec  1 datagrams received out-of-order
 [  3]  0.0-60.0 sec  69.8 MBytes  9.77 Mbits/sec   0.030 ms 1200/51021 
 (2.4%)
 [  3]  0.0-60.0 sec  1 datagrams received out-of-order
 [root@ ~]#

 As you can see, there is a ~2.5% packet loss. This packet loss is due
 to the fact that the router bw-10Mbps policer drops a small percentage of
 packets in 

Re: [j-nsp] J series users bitten by the massive memory useincrease with flow mode add, please file jtac cases.

2010-07-21 Thread Christopher E. Brown
On 7/20/10 10:54 PM, Leigh Porter wrote:
 
 I thought that as soon as you turn MPLS on the flow mode was disabled and
 you were back to good old packet mode?
 
 --
 Leigh

It puts things in packet mode, but all of the memory pre-allocs to
support flow mode remain in play.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] J series users bitten by the massive memory useincrease with flow mode add, please file jtac cases.

2010-07-21 Thread Christopher E. Brown
On 7/21/2010 12:48 PM, Heath Jones wrote:
 I think you should actually give the renaming of the binary a go. If you
 rename flowd (or the name of the process using memory), it won't be found and
 loaded on next boot. Obviously this is a hack and not what you want to
 be relying on in a production network, but if it solves the issue then
 good. That and hassle Juniper about a longer term solution.
  
 The other solution is to remove the statement that causes the daemon to
 load on boot, but I can't remember where that is and what loads it (init
 / rc?).
  
 Killing the process first will let you check if there are any other side
 effects.


The process is required for forwarding.  It can be disabled in the config, but 
then all
routing stops.



-- 

Christopher E. Brown   chris.br...@acsalaska.net   desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] J series users bitten by the massive memory useincrease with flow mode add, please file jtac cases.

2010-07-21 Thread Christopher E. Brown
On 7/21/2010 12:34 PM, Smith W. Stacy wrote:
 On Jul 21, 2010, at 12:35 PM, Christopher E. Brown wrote:
 On 7/20/10 10:54 PM, Leigh Porter wrote:
 
 I thought that as soon as you turn MPLS on the flow mode was disabled and 
 you were
 back to good old packet mode?
 
 -- Leigh
 
 It puts things in packet mode, but all of the memory pre-allocs to support 
 flow mode
 remain in play.
 
 The J-series software has always pre-allocated memory to the forwarding 
 daemon. This
 was always the case, even in 7.x and 8.x software that only supported 
 packet-mode. The
 fact that memory is still pre-allocated when you disable flow mode is not 
 surprising.
 
 Simply looking at the memory usage with show commands doesn't tell the 
 complete
 story. The real question is can the router handle a larger routing and 
 forwarding table
 once flow mode is disabled. If the answer is no, then I would agree that 
 your request
 is a legitimate feature enhancement.
 
 --Stacy


The issue is not that memory is being pre-allocated to the forwarding / flow 
process.
This is expected and required to function.



The issue is that when things switched to flow support the memory usage went 
*way* up, and
even when you convert to packet mode it is not reduced.


If adding flow support causes the forwarding daemon to grow a couple hundred 
megs then
going back to packet should free most of that memory.


A 1GB J that used to run with at least a few hundred megs available now has
< 40 megs free...
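
(For anyone wanting to compare numbers on their own boxes, the usual checks
are the following operational commands; output format varies by release:)

show chassis routing-engine | match "Memory utilization"
show system processes extensive | match "flowd|fwdd"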

-- 

Christopher E. Brown   chris.br...@acsalaska.net   desk (907) 550-8393
 cell (907) 632-8492
IP Engineer - ACS

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] J series users bitten by the massive memory useincrease with flow mode add, please file jtac cases.

2010-07-21 Thread Christopher E. Brown
On 7/21/2010 6:09 AM, Jay Hanke wrote:
 After implementing the procedure did you see a drop in memory utilization?
 If so, how much?
 
 jay


No reduction *AT ALL*, that is the issue.


Turning off flow mode does not free the pre-alloced memory used to support flow 
functions.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] J series users bitten by the massive memory useincrease with flow mode add, please file jtac cases.

2010-07-21 Thread Christopher E. Brown
On 7/21/2010 1:23 PM, Heath Jones wrote:
 Chris - Sorry I didn't realise the process had changed names and we are
 actually talking about the forwarding process itself. In that case, the
 only other thing I can think of right now is:
 When the forwarding process starts, it allocates the 400Mb+ for these
 tables. The question is if the forwarding process is making a decision
 based on the configuration *before* the point of memory allocation as to
 whether the allocation is required.
  
 This is what you need to know from Juniper engineers / dev team. It
 probably wasn't written that way, and if not it makes searching for
 configuration statements to achieve the goal pointless!!
 (It's highly unlikely that they coded deallocation functions for those
 structs. Much simpler to just restart a process..)
  
 Please let me know how you go with this - its an interesting problem!



I have disabled and then re-started the process on a live router w/ packet mode 
config, no
change in use.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] J series users bitten by the massive memory useincrease with flow mode add, please file jtac cases.

2010-07-21 Thread Christopher E. Brown
On 7/21/2010 2:28 PM, Nilesh Khambal wrote:
 I am not a J-Series person and don't know much about flowd operation but
 does the memory utilization come down when you reboot the router after
 disabling the flow mode?
 
 How do the flowd memory stats look in show system processes
 extensive before and after disabling flowd?
 

No, if it did I would not consider it an issue.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] J series users bitten by the massive memory use increase with flow mode add, please file jtac cases.

2010-07-21 Thread Christopher E. Brown
On 7/21/2010 3:47 PM, Heath Jones wrote:
 Chris - Is the current situation: that Juniper have said there is no
 workaround / configuration change that can be made to stop the
 allocation of memory for the flow forwarding information?


The current response is that this memory consumption is by design and 
therefore not a bug.


The fact that this makes a J series purchased with the max available memory less 
than 6 months
ago almost out of memory from day 1 does not seem to be getting through.


I keep hammering this point with the SE on the case and am pushing the issue as a
"silly design flaw, fix it" through my account team as well.  I hope everyone else 
does the same.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] J series users bitten by the massive memory use increase with flow mode add, please file jtac cases.

2010-07-20 Thread Christopher E. Brown


I know a lot of us here have been bitten by this, and the fact that disabling 
flow mode and
reverting to packet does not free up any of the ~ 460MB or so being eaten by 
fwdd/flowd is
insane.
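
For anyone following along, the packet-mode knob being argued about is roughly
the following (hierarchy from memory, a reboot is needed for it to take
effect, and as noted above it does not give the memory back):

set security forwarding-options family mpls mode packet-based
set security forwarding-options family iso mode packet-based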


I am currently having the "This is a design feature, the pre-alloc is planned"
argument
with an SE.


I have no issue with flow features being added, looks great for branch office 
use.


What is killing me is that flow is not wanted, needed or usable in a 
small-infra use with
multipath and MPLS services, but even disabled, flowd is eating half the box 
memory.


52% memory usable at bootup (bare config) is not sane, and a 1GB J6350 used to 
be able to
handle a few thousand IGP routes plus a full table with ease and lots of 
headroom; now it
is at 92%.




I am pushing this w/ account team and support case, please do the same.  If 
enough people
complain about the insane resource consumption they might actually fix it.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp