Re: [j-nsp] CoS buffer size

2015-07-03 Thread Adam Vitkovsky
Hi Dan, Saku, Marcin,

My understanding was that you might be able to oversubscribe only when using 
PIR (need to test).
In that case all the queues are in the excess region, so only the excess 
priorities are honoured (HI and LO in strict-priority fashion), and queues 
with the same priority are serviced round-robin.

I also thought that the weight with which queues in the excess region are 
served is proportional to the transmit-rate (as a percentage of the VLAN PIR).
Looking at the show outputs and reading your test results, though, it seems 
that's not the case.
But have you tried setting excess-rate for the queues? That should be 
honoured while a given queue is in the excess region, right?

I just can't believe that once in the excess region, queues (configured with 
percentages) use the main interface PIR. Is that a bug?

With regard to buffers: can I oversubscribe buffers by configuring, say, 
delay-buffer-rate percent 10 for 11 VLANs on a physical interface?
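I.e. something along these lines on each VLAN unit (interface and unit 
numbers are just placeholders):

class-of-service {
    traffic-control-profiles {
        tcp-10pct {
            delay-buffer-rate percent 10;    # per-VLAN buffer allocation
        }
    }
    interfaces {
        ge-0/0/0 {
            unit 101 {
                output-traffic-control-profile tcp-10pct;
            }
            /* ...repeated for units 102 through 111, totalling 110% */
        }
    }
}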



adam

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] DHCPv6-PD server Access-Internal routes on Branch SRX

2015-07-03 Thread Hugo Slabbert
I'm not getting any responses on the Juniper forums, but am hoping this 
list may have some answers.


I'm labbing up a branch SRX as a DHCPv6 PD server as managed CPE for 
customer sites.  A /48 is routed to the SRX, and the SRX in turn would dish 
that out to a customer device via PD.  Our ideal deployment would be to 
just do PD with link-local only on the touchdown (i.e. no SLAAC, NDRA, or 
ia-na).
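Roughly what I have configured, for reference (the prefix, names, and 
delegated prefix length here are placeholders):

system {
    services {
        dhcp-local-server {
            dhcpv6 {
                group cpe {
                    interface ge-0/0/1.0;    # touchdown to the customer device
                }
            }
        }
    }
}
interfaces {
    ge-0/0/1 {
        unit 0 {
            family inet6;                    # link-local only; no GUA/ULA
        }
    }
}
access {
    address-assignment {
        pool customer-pd {
            family inet6 {
                prefix 2001:db8:1000::/48;   # the /48 routed to the SRX
                range pd prefix-length 56;   # hand out /56s via IA_PD
            }
        }
    }
}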


DHCPv6 PD works fine and the customer equipment gets the prefix and can set 
up a ::/0 route via RAs from the SRX.  The problem is that if the SRX's 
touchdown interface to the customer device has LL only, it doesn't install 
an Access-Internal route for the delegated prefix, so the customer's PD 
prefix is unreachable.


If I add a GUA or ULA on the SRX's touchdown interface to the customer 
equipment and add that /64 under the touchdown interface's prefix stanza 
under router-advertisement, the access-internal route gets installed 
properly on the SRX when the customer DHCPv6 client gets its PD lease.
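I.e. the workaround looks like this (addresses made up):

interfaces {
    ge-0/0/1 {
        unit 0 {
            family inet6 {
                address 2001:db8:ffff:1::1/64;   # GUA added to the touchdown
            }
        }
    }
}
protocols {
    router-advertisement {
        interface ge-0/0/1.0 {
            prefix 2001:db8:ffff:1::/64;
        }
    }
}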


Is this expected behaviour?  Is running ia-pd with link-local not an 
accepted deployment model?  I flipped around the roles in the lab with a 
Cisco 867 acting as the PD server and the SRX100 as a client, and IOS is 
happy to install a route for the PD prefix with link-local only on the 
touchdown.


Test gear was an SRX110H2-VA. The behaviour was the same on all of the 
following:


- 12.1X44-D45.2
- 12.1X46-D35.1
- 12.1X47-D20.7
- 12.3X48-D10.3

--
Hugo

h...@slabnet.com: email, xmpp/jabber
PGP fingerprint (B178313E):
CF18 15FA 9FE4 0CD1 2319
1D77 9AB1 0FFD B178 313E

(also on textsecure & redphone)



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

[j-nsp] L2 flooding

2015-07-03 Thread Johan Borch
Hi!

I have an L2 network with a core and a bunch of access switches, all Juniper
EX. The links between the access switches and the core are configured as
trunks with vlan members all.

Here's my problem: I'm deploying a Ceph cluster, and the cluster is running
on its own VLAN. This VLAN only exists on the core and on one other switch.
The problem is that Ceph is very chatty, and because the core links accept
all VLANs, the core floods the traffic to all other switches and fills up
the uplinks (even though the VLAN doesn't exist on those switches). Is there
some way to mitigate this, other than the obvious options of not running
vlan members all on the uplinks or moving the Ceph cluster onto its own
switches?

Johan
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] VPN over ADSL With 4G Backup

2015-07-03 Thread Hugo Slabbert

Sorry for the long delay in replies.


We will have a non-RFC1918 IP address at the hub and the spokes will get a
dynamic IP from the provider through ADSL2+.


I haven't had to deal with dynamic IPs on SRX IPsec tunnel endpoints, as 
I've been fortunate that we can maintain enough control of the links to 
require statics.  That said, I *believe* this should just change your IKE 
gateway configs on the hub to reference a dynamic gateway for each customer 
site rather than using a static destination gateway IP, e.g.:


security {
    ike {
        gateway spoke1 {
            ike-policy spoke1-policy;
            dynamic {
                hostname spoke1.example.org;
            }
            external-interface ike-ext-interface;
        }
    }
}

Be sure to use aggressive mode in your IKE policy.
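Something like this on the policy side (sketch; the proposal-set and key are 
placeholders):

security {
    ike {
        policy spoke1-policy {
            mode aggressive;              # needed for dynamic peers with PSKs
            proposal-set standard;
            pre-shared-key ascii-text "$9$placeholder";
        }
    }
}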


The spokes should have 4G as backup for the ADSL2+.

How should the backup link be configured?

I assume st0.x multipoint will be configured at the hub.


There are a few different ways to slice it.  Multipoint at the hub is one 
option.  I haven't run a multipoint routed IPsec setup on Junos, so I'm 
extrapolating a bit here, and hopefully somebody will tell me I'm being an 
idiot if I veer too far off course.


If you're doing backup links and running a protocol, I would set up 2x 
multipoint VPN interfaces at the hub, banked off different IPs (could be 
the same external interface with multiple IPs bound; use local-address 
a.b.c.d and local-identity inet a.b.c.d under the IKE gateway 
definitions on the hub to distinguish the two).  Point the primary link 
from the branches to the first multipoint st0.x interface at the hub, and 
the secondary branch links at the second multipoint st0.x interface at the 
hub.  Set your protocol interface metrics/costs so that the second 
multipoint st0.x at the hub has a higher cost.  If you were to use just 
one multipoint st0.x at the hub, the hub would not have a way to 
distinguish route preferences between the primary and secondary links.
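Sketch of the hub side (IPs, names, and hostnames are made up, and I'm 
assuming local-address is available on your Junos version):

security {
    ike {
        gateway spoke1-primary {
            ike-policy spokes-policy;
            dynamic {
                hostname spoke1.example.org;
            }
            local-address 192.0.2.1;
            local-identity inet 192.0.2.1;
            external-interface ge-0/0/0.0;
        }
        gateway spoke1-backup {
            ike-policy spokes-policy;
            dynamic {
                hostname spoke1-backup.example.org;
            }
            local-address 192.0.2.2;
            local-identity inet 192.0.2.2;
            external-interface ge-0/0/0.0;
        }
    }
}
interfaces {
    st0 {
        unit 0 {                          # primary multipoint; lower IGP cost
            multipoint;
            family inet {
                address 10.255.0.1/24;
            }
        }
        unit 1 {                          # secondary multipoint; higher IGP cost
            multipoint;
            family inet {
                address 10.255.1.1/24;
            }
        }
    }
}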


In terms of backup paths / failover:
Will you route *all* spoke site traffic through the hub?  Or just 
inter-site traffic, with e.g. regular public internet traffic going out the 
spoke's local provider's gateway?


If the former:
Create static /32 routes on the branches for the hub's IKE gateway IPs for 
the primary and secondary st0.x multipoint interfaces (I'll just call them 
st0.0 (primary) and st0.1 (secondary) from here on).  The /32 route for 
st0.0's IKE gateway IP should go via your default gateway on the ADSL 
interface, with the /32 route for st0.1's IKE gateway IP via the HSPA backup 
default gateway.  Actually, given that we're talking about DHCP on the ADSL, 
consider putting the ADSL and HSPA interfaces in their own discrete 
virtual-router routing-instances, so that the 0/0 route picked up from DHCP 
on the ADSL gets installed in that VR and the static 0/0 route for the 
HSPA can be isolated into its own VR.
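Sketch of what I mean on the branch (interface names depend on your 
ADSL/HSPA hardware; all IPs made up):

routing-instances {
    adsl-vr {
        instance-type virtual-router;
        interface pp0.0;                  # ADSL PPPoE; its 0/0 lands in this VR
    }
    hspa-vr {
        instance-type virtual-router;
        interface dl0.0;                  # HSPA/4G dialer
        routing-options {
            static {
                route 0.0.0.0/0 next-hop 203.0.113.1;
            }
        }
    }
}
routing-options {
    static {
        route 192.0.2.1/32 next-table adsl-vr.inet.0;   # hub primary IKE gateway via ADSL
        route 192.0.2.2/32 next-table hspa-vr.inet.0;   # hub secondary IKE gateway via HSPA
    }
}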


Failover between primary and secondary is then handled by whatever 
protocol you run within the st0.x tunnels.

If the latter (VPN tunnels for inter-site traffic only; public internet 
traffic egressing locally at the branches), you'll still want static routes 
configured on the branches for the 2x different IKE gateway IPs on the hub, 
but now you also need to handle failover locally.  My guess is your best 
bet for that would be RPM to monitor connectivity across your ADSL 
connection and pull that route in case of RPM failure.  I haven't done that 
on a DHCP setup either, so YMMV on the details of that implementation.
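Something like this, maybe (sketch; the probe target and next-hop are 
placeholders):

services {
    rpm {
        probe adsl-health {
            test ping-upstream {
                probe-type icmp-ping;
                target address 198.51.100.1;   # something reachable only via the ADSL path
                probe-count 3;
                probe-interval 5;
                test-interval 10;
                thresholds {
                    successive-loss 3;
                }
            }
        }
    }
    ip-monitoring {
        policy adsl-down {
            match {
                rpm-probe adsl-health;
            }
            then {
                preferred-route {
                    route 0.0.0.0/0 {
                        next-hop 203.0.113.1;   # HSPA gateway takes over on probe failure
                    }
                }
            }
        }
    }
}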


Hope that helps; I'd be curious to hear how this turns out.

--
Hugo

h...@slabnet.com: email, xmpp/jabber
PGP fingerprint (B178313E):
CF18 15FA 9FE4 0CD1 2319
1D77 9AB1 0FFD B178 313E

(also on textsecure & redphone)

On Sat 2015-Jun-13 11:39:11 +0300, Nc Aji aji14...@gmail.com wrote:


Appreciated your inputs.

To make it a bit more clear:

We will have a non-RFC1918 IP address at the hub and the spokes will get a
dynamic IP from the provider through ADSL2+.

The spokes should have 4G as backup for the ADSL2+.

How should the backup link be configured?

I assume st0.x multipoint will be configured at the hub.

Do you have any suggestions regarding the configurations?

Thx



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] L2 flooding

2015-07-03 Thread Hugo Slabbert
On Fri 2015-Jul-03 20:42:36 +0200, Johan Borch johan.bo...@gmail.com 
wrote:



Hi!

I have an L2 network with a core and a bunch of access switches, all Juniper
EX. The links between the access switches and the core are configured as
trunks with vlan members all.

Here's my problem: I'm deploying a Ceph cluster, and the cluster is running
on its own VLAN. This VLAN only exists on the core and on one other switch.
The problem is that Ceph is very chatty, and because the core links accept
all VLANs, the core floods the traffic to all other switches and fills up
the uplinks (even though the VLAN doesn't exist on those switches). Is there
some way to mitigate this, other than the obvious options of not running
vlan members all on the uplinks or moving the Ceph cluster onto its own
switches?


mvrp?
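
I.e. let the access switches register only the VLANs they actually need, so 
the core stops flooding the Ceph VLAN down every trunk.  A sketch (interface 
names made up; note the trunks would then use dynamic VLAN registration 
rather than static vlan members all):

protocols {
    mvrp {
        interface ge-0/0/46.0;    # uplink trunk to access switch
        interface ge-0/0/47.0;
    }
}

You'd run that on the core and on the access switches' uplinks.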



Johan


--
Hugo

h...@slabnet.com: email, xmpp/jabber
PGP fingerprint (B178313E):
CF18 15FA 9FE4 0CD1 2319
1D77 9AB1 0FFD B178 313E


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] CoS buffer size

2015-07-03 Thread Dan Peachey
Hi Adam,

 My understanding was that you might be able to oversubscribe only when
 using PIR (need to test). In that case all the queues are in the excess
 region, so only the excess priorities are honoured (HI and LO in
 strict-priority fashion), and queues with the same priority are serviced
 round-robin.


You can oversubscribe both PIR and CIR (G-rate), so you have to be careful
how much G-rate you allocate (if you are selling a PIR/CIR service, that
is). In H-CoS mode with only PIR set, all queues are in excess even if the
aggregate of all the shapers does not oversubscribe the interface bandwidth
(or aggregate shaper). In per-unit mode with only PIR set, you have to
oversubscribe the shapers to end up with all queues in excess.
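To illustrate the two modes (sketch; names and rates made up):

interfaces {
    xe-0/0/0 {
        hierarchical-scheduler;           # H-CoS mode; per-unit-scheduler for per-unit mode
        vlan-tagging;
    }
}
class-of-service {
    traffic-control-profiles {
        tcp-pir-only {
            shaping-rate 50m;             # PIR only; no guaranteed-rate, so no G-rate
        }
    }
    interfaces {
        xe-0/0/0 {
            unit 100 {
                output-traffic-control-profile tcp-pir-only;
            }
        }
    }
}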


 I also thought that the weight with which queues in the excess region are
 served is proportional to the transmit-rate (as a percentage of the VLAN
 PIR). Looking at the show outputs and reading your test results, though,
 it seems that's not the case.


Well, the weights are determined from the transmit-rates, but the queues
aren't proportioned the way you'd expect relative to the transmit-rates.
For example, you'll find that you can get packet loss in a queue that is
sending at less than its contracted rate if you oversubscribe another queue
at the same priority level. The weightings don't translate to an exact
percentage of bandwidth in reality.


 But have you tried setting excess-rate for the queues? That should be
 honoured while a given queue is in the excess region, right?


Yep, but if you set excess-rate = transmit-rate then the weights are just
the same as if you hadn't set them, and it doesn't affect the behaviour of
the queues.


 I just can't believe that once in the excess region, queues (configured
 with percentages) use the main interface PIR. Is that a bug?


It's more a case that per-queue guaranteed rates are set to zero when you
are in H-CoS or oversubscribed per-unit mode, which means that you have to
factor in the G-rate for each node when it would be nice not to have to
bother (and you can choose not to bother, but then things don't work quite
the way you expect them to).
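I.e. you end up having to set something like this per node (rates made up):

class-of-service {
    traffic-control-profiles {
        tcp-cust1 {
            shaping-rate 100m;            # PIR
            guaranteed-rate 20m;          # G-rate/CIR; gives the node non-zero guarantees
        }
    }
}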


 With regard to buffers: can I oversubscribe buffers by configuring, say,
 delay-buffer-rate percent 10 for 11 VLANs on a physical interface?


Not sure on that one; never tried it, to be honest.

Dan
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MPLS Endpoint Discussion

2015-07-03 Thread Alexander Arseniev



On 03/07/2015 01:45, Ben Dale wrote:
Always use loopbacks - if the link goes down (or the preceding node), 
the destination of the LSP goes with it - Junos will not maintain 
prefixes for downed interfaces. You mention this being a ring - if you 
target the LSP to a loopback, your IGP will provide an alternative 
path after a failure. 
Sometimes it is a feature - more than once I have come across a request 
for an "if it fails, it fails" type of service, meaning a cheap, 
non-resilient connection.
And LSPs destined to link IPs, coupled with strict EROs, fit such a 
requirement nicely, but I digress...
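E.g. (sketch; names and addresses made up):

protocols {
    mpls {
        label-switched-path nonresilient-to-pe2 {
            to 192.0.2.10;                # far-end *link* address, not a loopback
            no-cspf;
            primary ring-a;
        }
        path ring-a {
            198.51.100.2 strict;
            198.51.100.6 strict;
        }
    }
}

If any hop on the strict path dies, the LSP stays down - which is exactly 
the point for that kind of service.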

HTH
Thanks
Alex
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp