Re: [j-nsp] Packet mode mpls (was Layer 2 feature on srx)

2012-04-10 Thread Pavel Lunin
Phil Mayers wrote:

On 04/10/2012 06:17 AM, Doug Hanks wrote:

 In the context of packet-mode, the family mpls is analogous to inet.  This
 is correct.


 Not sure I understand this.

 "Analogous" implies what, here? That enabling packet-mode for MPLS
 implicitly enables it for IPv4?


Yep. Same for ISO.



 If that's the case, why is there also a family inet option?


Looks like they originally meant to have different options for different
families, allowing some to run in flow mode and some in packet mode
simultaneously. But about four years have passed, I think, and it's still this
way: turning any family into packet mode turns the whole box. Sort of the
same story as the 'say per-packet, mean per-flow' LB knob.

There might be a difference for inet6 (I am not sure) since the version
in which stateful processing for IPv6 was added. I just remember that,
while playing with something that involved IPv6, I had to explicitly set
packet mode for it, otherwise it didn't work (or rather worked in flow mode
but I had no zones/policies). I must repeat, I'm not sure.
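For reference, here is a hedged sketch of how the per-family packet-mode knobs look on a branch SRX (syntax from memory, so verify against your Junos release; IIRC a reboot is required for the forwarding-mode change to take effect):

```
set security forwarding-options family mpls mode packet-based
set security forwarding-options family iso mode packet-based
set security forwarding-options family inet6 mode packet-based
```

As discussed above, committing any one of these effectively flips the whole box into packet mode.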
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Layer 2 feature on srx

2012-04-10 Thread Pavel Lunin
4/10/2012 Doug Hanks wrote:


 I suggest that the OP use 'set vlan <name>' instead of 'set bridge-domain
 <name>'. Also use 'set interfaces vlan' instead of 'set interfaces irb'.

 I'm not even sure why the SRX accepted this configuration.


The MX-style L2 commands are supported on SRX (branch as well, yes), and it
looks like this is for transparent flow mode. With some limitations, though.
Say, you can't enable flexible-ethernet-services on FE interfaces, only on
GE. Enabling flexible-vlan-tagging gives you the following warning: 'Only
compatible with vpls vlan encapsulations' (I haven't ever seen anything
like this on MX), etc.

So, as far as I understand:

MX-style L2 config:

— for transparent flow mode,
— IRB interfaces are for management traffic only,
— switching is performed in software on branch (and I wonder where on high-end),
— I have no idea what happens to it in packet mode.

EX-style L2 config:

— for general switching,
— performed by Broadcom chips,
— has the consequent limitation on which ports traffic can be switched
between (same PIM),
— RVI interfaces can really route traffic (software-based in both packet
and flow modes).

More details:
http://www.juniper.net/techpubs/en_US/junos11.4/information-products/topic-collections/security/software-all/layer-2/index.html
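To make the two styles concrete, here is a minimal, hedged sketch (VLAN IDs, names and addresses are invented; see the link above for the real syntax).

MX-style (transparent flow mode):

```
set bridge-domains bd10 vlan-id 10
set bridge-domains bd10 routing-interface irb.10
set interfaces irb unit 10 family inet address 192.0.2.1/24
```

EX-style (general switching):

```
set vlans v10 vlan-id 10
set vlans v10 l3-interface vlan.10
set interfaces vlan unit 10 family inet address 192.0.2.1/24
```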

It would be great if someone could sort out the points I marked with 'I'm not
sure'/'have no idea'. Or, if I'm totally wrong somewhere, please tell me.

--
Regards,
Pavel

Re: [j-nsp] Layer 2 feature on srx

2012-04-11 Thread Pavel Lunin

10.04.2012 20:13, Michael Still wrote:
 OP wanted to use the IRB ints as next hop for their respective
 networks. This is apparently not supported on the SRX platform in
 transparent mode: 
Yeah, I mentioned this as well. In my post I just wanted to explain why
these (MX-style L2) commands were successfully committed by the SRX (which
is really not obvious, and I even think whoever decided to reuse this
part of the config for this purpose on SRX should have thought twice, or at
least documented it more clearly. This is not the first time I've seen
such confusion).
 In this release, the IRB interface on the SRX Series device does not
 support traffic forwarding or routing. In transparent mode, packets
 arriving on a Layer 2 interface that are destined for the device’s MAC
 address are classified as Layer 3 traffic while packets that are not
 destined for the device’s MAC address are classified as Layer 2
 traffic. Packets destined for the device’s MAC address are sent to the
 IRB interface. Packets from the device’s routing engine are sent out
 the IRB interface. So in transparent / IRB mode the IRB int can only
 be used as a management interface. What the OP needs to do is MX testing
 using an MX device.
IIRC, by now they can't even terminate IPsec on an IRB. An interesting
question here is whether they are going to make IRBs real L3
ifaces. What a mess it would be :)


Re: [j-nsp] CGN ob MX5?

2012-04-14 Thread Pavel Lunin
Hi,

Until Juniper releases the MS-MIC (I have no idea when that will happen),
the MX5–MX80 boxes really support no NAT at all.

What they call inline NAT on Trio (recently released) is by now… umm… a sort
of patch for a particular customer or something like that. It only supports
1:1 bidirectional static mapping, which has no relation whatsoever to CGN.
If you take the license price into account, you'll understand what my
'umm…' really means.

AFAIK, the idea behind real inline NAT (not just static mapping) on Trio is
to use the embedded TCAM memory, something like what NetScreen/ISG firewalls
did. This approach processes the first packets through the 'long cycle' in
software and then offloads the session state to TCAM, through which the
subsequent packets are switched.

Two challenges arise here:

1. The need for a flexible and powerful CPU for the 'long cycle' processing.
I'm afraid the LU chip inside Trio is not the best fit here.
2. TCAM update speed bottleneck.

If you take a look at the new-sessions-per-second rate of any TCAM-based
flow device, you'll see it's not that high in the context of CGN.

However, as far as I know, the Juniper mobile team, which develops a GGSN
out of the MX, is going this way, and they even have special MPCs with
extended TCAM. In the mobile world, though, where session lengths are usually
longer on account of the lower access rates and, consequently, the
new-session rates are also lower, this can theoretically be a solution.

--
Pavel

Re: [j-nsp] Firewall best practices

2012-06-13 Thread Pavel Lunin


 I have a question regarding managing policies among multiple sets of
 firewalls. I don't know what industry standard / best practice is for
 managing rules among multiple devices.

My two cents.

While there is really no such standard, there are things to keep in mind.

Here are some mistakes people tend to make (the ones I managed to recall):

1. Too many firewalls. The old-school security design style (promoted, let's
be honest, by vendors and their partners) recommends putting a
separate firewall in front of everything: a dedicated internal firewall, a
dedicated external firewall, a dedicated VPN aggregator, dedicated this,
dedicated that. Just open any shmeeseeaee-security book and you'll find lots
of figures like that. Sometimes they look attractive in PowerPoint, but
it is unmanageable. Rule of thumb: the fewer stateful devices you have
in the network (including virtual ones), the more secure you are. Modern
firewalls have enough horsepower for almost everything.

2. Putting each VLAN into a dedicated zone. I believe it's the habit of
attaching ACLs/filters to interfaces that makes people do this. This
mistake implies a lot of zones and a complete mess in the policies, which
is not what security zones were invented for. The fewer
security zones you pack your interfaces into, the closer you are to the
'Can your mother read it?' rule of thumb for policies. Some logic is
required, though. Don't get me wrong, I don't mean putting everything into
a single zone (just as I don't mean having no firewall at all).
Normally you put something into a separate zone because you want to
isolate it _by default_ (!) from the other zones.

3. But not the other way around! Putting a few interfaces into a zone does
not mean all traffic is permitted between those interfaces. On SRX you even
have to write explicit policies to allow intra-zone traffic. So it's usually
OK to put all your DMZ VLANs into a single zone and allow only, say,
'any to dns/ntp/kerberos using application dns/ntp/kerberos' inside it.

4. Putting all IPsec VPNs into a single zone. This makes the VPNs
indistinguishable from a security point of view. If the remote side is a
branch with users just as trusted as your local LAN users, it's
probably better to put the tunnel interface into the
Trust/LAN/however-you-wish-to-call-it zone. If you have a few tunnels to
partners (very often mixed with leased lines), put them all into a
special zone 'Partners', but don't put your tunnels to branches there.
And so on.

5. Items 1 and 4 also mean that a dedicated device for IPsec is a
particularly bad idea.

6. Complicated routing schemes, especially with FBF/PBR, static /32s and
other trash. This is hard to scale in the plain routing sense, but when
it comes to security, all the design beauty can be broken with a single
clumsy step. Add to this the fact that security teams are often not experts
in complex routing. However, business needs sometimes require this anyway.
But when your boss asks you to extract a particular flow and forward it via
Paris when it should normally go through Tokyo together with the rest of the
traffic, tell him the real cost of the solution and ask him to seriously
weigh the benefits and losses. If the trick is really needed, try to
use plain routing with no additional security constructs.

7. Particularizing policies with very specific address objects. Also a
very effective way to turn your policies into a real nightmare. To have
well-designed network security, first you need a well-designed network.
Well-organized VLANs, addressing, 802.1X --- all this makes your
firewall config much smoother. In the ideal case you should be able to
use 'any' or subnet-based address objects as the source in all policies, and
/32 objects only as destinations in policies permitting connections to
your servers.
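As a hedged illustration of points 3 and 4 (zone, policy and address-book names here are invented), intra-zone traffic on SRX still needs an explicit policy:

```
set security policies from-zone DMZ to-zone DMZ policy allow-infra match source-address any
set security policies from-zone DMZ to-zone DMZ policy allow-infra match destination-address infra-servers
set security policies from-zone DMZ to-zone DMZ policy allow-infra match application junos-dns-udp
set security policies from-zone DMZ to-zone DMZ policy allow-infra then permit
```

And tunnel interfaces go into the zone matching the trust level of whatever is behind them:

```
set security zones security-zone Trust interfaces st0.0
set security zones security-zone Partners interfaces st0.1
```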

P. S. And yes, security zone names are not worth much. Trust/untrust/dmz
or lan/wan/man/internet/DC/server-farm --- all the same. Better pay more
attention to your address-book entry names and interface descriptions.


--
Pavel


Re: [j-nsp] IPv6 static default route in routing instance?

2012-06-13 Thread Pavel Lunin

My guess is that the direct route to your next-hop :a500:0:2::1 is
not in this instance. Check the interface address config for
ge-1/1/0.503 and ge-1/1/0.504.
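A hedged sketch of the kind of fix this implies (the address is illustrative, from documentation space): the static next-hop can only resolve against a direct route, so an interface inside the instance needs an inet6 address on the right subnet, e.g.:

```
set interfaces ge-1/1/0 unit 503 family inet6 address 2001:db8:0:1::2/64
```

Without an inet6 address on one of the instance's interfaces, the v6 static stays out of dmz.inet6.0 while the v4 default (whose next-hop does resolve) appears as expected.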

13.06.2012 09:48, Gordon Smith wrote:
 Hi,

 Just wondering if anybody's come across this before - default IPv6
 static not appearing in the routing instance inet6 table...

 Instance is a VRF:

 instance-type vrf;
 interface ge-1/1/0.503;
 interface ge-1/1/0.504;
 route-distinguisher 56263:101;
 vrf-import [ reject-all ];
 vrf-export [ reject-all ];
 vrf-table-label;
 routing-options {
 graceful-restart;
 rib dmz.inet6.0 {
 static {
 route ::/0 next-hop :a500:0:2::1;
 }
 }
 static {
 route 0.0.0.0/0 {
 next-hop xxx.x.216.54;
 no-readvertise;
 }
 }
 }


 Looking at the dmz.inet6.0 table shows directly connected routes, but
 not the default.
 In contrast, dmz.inet.0 has a v4 default as expected.




Re: [j-nsp] SRX hardware acceleration caveats

2012-06-18 Thread Pavel Lunin

18.06.2012 14:47, Phil Mayers wrote:
 This is interesting. I was not aware of the EZchip offload in the
 high-end stuff.
According to the docs, it seems like the only advantage is reduced latency:
http://www.juniper.net/techpubs/en_US/junos12.1/information-products/topic-collections/security/software-all/security/index.html
 Can anyone comment on the expected (or observed) differences in
 latency  jitter when using EZchip offload? Does it lead to
 theoretically higher throughput (bytes or packets)?
It should at least give lower latency because of the smaller number of
internal hops. But I really wonder how (or whether) it affects the new-session
setup rate and concurrent session scaling. I really doubt a single EZchip is
capable of handling state for millions of sessions, all of which can
easily pass through a single 10G interface (and, consequently, a single NPC).

To be honest, by now it looks like a feature for a particular customer.


Re: [j-nsp] SRX hardware acceleration caveats

2012-06-18 Thread Pavel Lunin

18.06.2012 15:22, Pavel Lunin wrote:
 easily pass a single 10G interface (and, consequently, NPC).
NPU, to be precise.


Re: [j-nsp] SRX hardware acceleration caveats

2012-06-18 Thread Pavel Lunin

18.06.2012 16:31, Phil Mayers wrote:
 However, the docs suggest that it might be possible to enable offload
 on a per-policy basis.

 This might be good for certain latency-sensitive flows, if true.

Yep, as I understood it, it's only per-policy (please correct me if I'm
wrong), which is, I really believe, because of the scaling limits. The
good news is that the license is _really_ cheap, at least for now :)



Re: [j-nsp] SRX hardware acceleration caveats

2012-06-18 Thread Pavel Lunin
2012/6/19 Benny Amorsen benny+use...@amorsen.dk


  To be honest, by now it looks like a feature for a particular customer.

 To me it seems like a feature for every customer... Without offloading,
 how would you do stateful firewalling at 10Gbps+?


Em… isn't 10G+ possible on SRX HE without offloading?

Well, don't get me wrong. I never said the offloading idea itself is bad
or whatever. But yes, I said it has challenges when it comes to scaling (we
saw that in the past), and I really don't know whether it can be solved these
days by commercially reasonable means, or whether the current semi-software
(MS-PIC/SPU-like) approach is still easier and cheaper. Again, I'm not
saying I know the answer and the second way is better. But if any vendor
announces an offload-based stateful device able to do tens or hundreds of
gigs, the first thing I want to ask is 'what about scaling?'. And so far I
don't see much of an answer to this in the context of SRX HE. In addition I
see lots of other quite serious limitations.

And what I mean by a feature for a particular customer is not a
feature other people don't need, but a feature that was implemented
for a key customer who can use it with the current limitations. If it were at
least Trio LU, I'd be more optimistic, but investing in the EZchip-based
offload really seems like a temporary solution. Well, IMHO.

Re: [j-nsp] SRX hardware acceleration caveats

2012-06-19 Thread Pavel Lunin


19.06.2012 03:25, Benny Amorsen wrote:
 Em… isn't 10G+ possible on SRX HE without offloading?
 I don't know, that is part of what I am trying to find out :)

Even the 'independent tests' from Cisco's friends do not claim that the SRX3k
can do 20G+.
http://www.cisco.com/en/US/prod/collateral/vpndevc/miercom_vs_juniper.pdf

I am sorry for that sort of link in such a respectable place :)

Re: [j-nsp] Whats the best way to announce an IP range in BGP? Doesn't physically exist anywhere.

2012-06-22 Thread Pavel Lunin


 I have a /24 I want to announce, but I don't actually have it anywhere on
 the network. I NAT some of its IP's on the SRX that has the BGP session
 with our providers.
A static discard route is really the best way. Aggregate/generate routes are
also theoretically possible, but unless you are sure you really need
some sort of external dynamism, it's better to nail it down with a static
route — fewer chances to have your routes damped somewhere after an internal
link flap.
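A minimal sketch of that approach (the prefix here is documentation space; your usual export policy toward the providers then picks up the static):

```
set routing-options static route 192.0.2.0/24 discard
```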

 I've been using static routes with the discard flag, but I don't really
 like the way the SRX handles traffic. It still creates sessions for traffic
 destined to IP's not used anywhere (hitting the static route) and can be
 easily dos'd because of this.
I've seen such a thing a couple of times, and it was not because the SRX
handles traffic wrongly, but due to some sort of misconfiguration. If a
packet fell under the discard route, the session would not be created
(otherwise you've caught a real showstopper bug, but I don't much believe
in that, to be honest). Moreover, in order to have a session established
you also need a security policy permit.

1. Check which route the packets actually fall under with 'show
security flow session session-identifier …'. Very likely your
packets actually fall under a longer, more specific route — say, one
automatically generated for proxy-arp or the like.

2. Check the zones from and to which the sessions are established, and
which policy permits the traffic. From what you describe, policies
should block such traffic anyway.



Re: [j-nsp] Whats the best way to announce an IP range in BGP? Doesn't physically exist anywhere.

2012-06-25 Thread Pavel Lunin


 This is exactly what happened. The session table filled up. One of
 our security guys took down our edge 650 cluster from a single unix
 box out on the net.
 This is what happens when you use a stateful box for an internet router.

 A router with a covering aggregate and some knowledge of the more
 specific routes on the interior would inexpensively discard traffic bound
 for unreachable destinations.

1. First, sorry for writing this once again, but it's just not the case.
Any more or less smart stateful device, whether SRX or anything else,
must not create session states for packets falling under a discard
route. And SRX does not — I checked. Filling up the session table is
caused by either a bug or (more likely) a design/config mistake.

2. There is nothing wrong with the idea of using a firewall as a single
border device for both stateful processing and ASBR duty in most networks
where a stateful edge device is needed and all external links are
terminated in a single site.

3. This particular problem has nothing to do with BGP or any other
routing. Even if Morgan had an MX at the edge and the SRX behind it, the
same thing would happen. He would use, say, a /27 as a source NAT
pool on the SRX and announce it to the MX via IGP or iBGP, using
the same means to create the route itself (static discard, aggregate,
whatever). What will happen if an external host initiates a session
towards an IP in this prefix? If the firewall creates sessions for such
packets, what is the firewall needed for at all?


Re: [j-nsp] Whats the best way to announce an IP range in BGP? Doesn't physically exist anywhere.

2012-06-26 Thread Pavel Lunin

25.06.2012 16:06, Scott T. Cameron:

 1. First, sorry for writing this once again, but it's just not the
 case.
 Any more or less smart stateful device, whether SRX or anything else,
 must not create session states for packets falling under a discard
 route. And SRX does not, I checked. Filling up the session table is
 caused by either a bug or (rather) a design/config mistake.



 I'm not sure I agree with this assessment.

 The SRX is very quick at disposing of invalid sessions, generally.
  However, it is easily susceptible to DDOS if you let it reach the
 session table.

 Here's some quick POC code:

 http://pastebin.com/FjgavSwn

 You can run this against some non-operational IPs, but present via,
 say, discard route in your config.  You will see the invalid sessions
 rise dramatically via 'show sec flow sess sum'.

Hm.

The test itself is not that complex and I've just run it: I wrote a
10.1.1.0/24 discard route and tried to ping it with the -f option.

Yes, the number of invalid sessions increases in the 'show sec flow
sess sum' output, but here is what happens in the flow trace:

 Jun 26 14:13:25
 14:13:25.049336:CID-0:RT:10.0.0.17/2031-10.1.1.20/6175;1 matched
 filter 1:
 Jun 26 14:13:25 14:13:25.049336:CID-0:RT:packet [84] ipid = 0, @4092e624
 Jun 26 14:13:25 14:13:25.049336:CID-0:RT: flow_process_pkt: (thd
 2): flow_ctxt type 14, common flag 0x0, mbuf 0x4092e400, rtbl_idx = 0
 Jun 26 14:13:25 14:13:25.049336:CID-0:RT: flow process pak fast ifl 71
 in_ifp ge-0/0/0.6
 Jun 26 14:13:25 14:13:25.049336:CID-0:RT: 
 ge-0/0/0.6:10.0.0.17-10.1.1.20, icmp, (8/0)
 Jun 26 14:13:25 14:13:25.049336:CID-0:RT: find flow: table 0x4354f9b8,
 hash 55925(0x), sa 10.0.0.17, da 10.1.1.20, sp 2031, dp 6175,
 proto 1, tok 6
 Jun 26 14:13:25 14:13:25.049336:CID-0:RT:  no session found, start
 first path. in_tunnel - 0, from_cp_flag - 0
 Jun 26 14:13:25 14:13:25.049336:CID-0:RT:  flow_first_create_session
 Jun 26 14:13:25 14:13:25.049774:CID-0:RT:  flow_first_in_dst_nat: in
 ge-0/0/0.6, out N/A dst_adr 10.1.1.20, sp 2031, dp 6175
 Jun 26 14:13:25 14:13:25.049814:CID-0:RT:  chose interface ge-0/0/0.6
 as incoming nat if.
 Jun 26 14:13:25 14:13:25.049814:CID-0:RT:flow_first_rule_dst_xlate:
 DST no-xlate: 0.0.0.0(0) to 10.1.1.20(6175)
 Jun 26 14:13:25 14:13:25.049814:CID-0:RT:flow_first_routing: vr_id 0,
 call flow_route_lookup(): src_ip 10.0.0.17, x_dst_ip 10.1.1.20, in ifp
 ge-0/0/0.6, out ifp N/A sp 2031, dp 6175, ip_proto 1, tos 0
 Jun 26 14:13:25 14:13:25.049814:CID-0:RT:Doing DESTINATION addr
 route-lookup
 Jun 26 14:13:25 14:13:25.049814:CID-0:RT:  packet dropped, no route to
 dest
 Jun 26 14:13:25 14:13:25.049814:CID-0:RT:flow_first_routing: DEST
 route-lookup failed, dropping pkt and not creating session nh: 0
 Jun 26 14:13:25 14:13:25.049814:CID-0:RT:  flow find session returns
 error.
 Jun 26 14:13:25 14:13:25.049814:CID-0:RT: - flow_process_pkt rc
 0x7 (fp rc -1)

Yes, this eats some central-point resources on SRX HE, or their software
analogue on branch SRX. Of course, some resources are also consumed for
internal NPC-SPC forwarding, etc. But I'd say it's CPU cycles rather than
session table memory. And indeed, if we issue 'show security flow session
destination-prefix 10.1.1.0/24', there are no records in the table.

However it is still less resource intensive than, say, processing
packets for which there is an explicit (or implicit) deny policy. And
this is what stateful firewalls are invented for.

So, I agree, any stateful device has a greater potential of being DoSed
than a router, simply because this risk is proportional to the number of
elementary operations the device performs on a packet, and, by
definition, a stateful device is more fragile here.

But, again, it does not depend on the firewall's routing role.  It is a
general cost of using a stateful device.

 Malicious user aside, a legitimate application trying to hit an
 invalid IP would give the same result.
A real-life example is even simpler: any source NAT box (CGN, enterprise,
whatever). It has, say, a NAT pool, which is also defined as a static
discard route and announced upstream using some routing protocol (it does
not matter which). All the IPs are used for NAT, so there are no unused
addresses. Users/subscribers behind the NAT use torrents, and, of course,
there are lots of sessions from outside hosts trying to establish
connections with your subscribers behind the NAT. And the same thing
happens: packets come to the NAT device and fall under the discard route.

Yep, it's a good idea to use devices that perform route lookups in
dedicated hardware for massive-scale NAT deployments, plus some
additional techniques like stateless filters and early aging. And common
sense, as Stefan said.

But it has nothing to do with routing (sorry :)

Moreover, imagine you don't have a static discard route for the NAT pool and
no dynamic routing is used at all — just a static route for the NAT pool
on the upstream device, pointing to the NAT device,

Re: [j-nsp] Quick Question About HA Setup

2012-07-17 Thread Pavel Lunin

Ben Dale wrote:
 All told though, I think once ISSU/LICU is addressed there'll be very little 
 reason not to cluster them.
In general I disagree. What you describe is basically about firewalls.
But the main issue with a multi-site cluster is the need to extend all
your VLANs to both devices. So you need to use some inter-site L2
switching solution instead of routing. While nothing is technically
impossible these days, building a resilient L2 network is quite a
difficult task for customers, most of whom have no idea what all
those VPLS-SHMEEPLS multihoming and L2 OAM things are. And usually such
solutions are much more expensive, which consequently leads to throwing
out important things, lack of redundancy, relying on ISPs, etc.

These cons usually outweigh the pros, IMHO. Say, if you count the money, two
clusters, one per site, with L3 between them is almost always
cheaper and easier than a resilient and scalable inter-site L2. Yes,
policy synchronization can be an issue, but let me use the words 'Junos
Space' to address this (yes, I know… :). Well, seriously, what if you
have three or more sites and want to synchronize your policies and
address objects? That should be managed by some external NMS anyway.
 One thing I've always wondered, but never bothered asking - why is it that 
 you need to be in the root level of the configuration to commit in a chassis 
 cluster?  
This is because of the private config mode used for clusters. But I also
don't understand why it's like this, when all the other multi-RE platforms
use the normal config mode.
 I can't count how many times I've hit this buried 7 stanzas deep in a 
 security policy, and wanting to commit something, then make a change at the 
 same level moments later...
What annoys me even more is the absence of 'commit confirmed' for SRX
cluster.


Re: [j-nsp] SRX: rate-limiting source NAT sources

2012-10-30 Thread Pavel Lunin


30.10.2012 01:55, Jonathan Lassoff wrote:
 Specific sources are mapped via NAT rules to specific egress IPs (for
 IP filtering in some places, outside of the SRXes in question).

 And once in a while, some endpoint will have a legitimate need to open
 up *many* connections (and then NAT states) that pass over this SRX
 deployment.
 Unfortunately, the rate of connection establishment relative to the
 application timeouts means that these heavy users can use up all of
 the ephemeral ports, blocking new flows from becoming established.
Looks like the session-limit screen option is what you need:
http://www.juniper.net/techpubs/en_US/junos11.4/information-products/topic-collections/security/software-all/security/topic-43929.html

It's applied per zone, so if you want to apply different limits to
different NAT pools, you need to put the users of different pools into
different zones. Otherwise you can only have a single setting for all
(maybe not that big an issue).
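A hedged sketch of what that could look like (names and the limit value are placeholders; tune the number per scenario, as noted below):

```
set security screen ids-option nat-users-screen limit-session source-ip-based 2000
set security zones security-zone users screen nat-users-screen
```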

Older Junos versions (10.1 at least; I don't know when it changed, but it
looks like it did) applied source and destination limits together, so if you
needed to limit only per source you also had to explicitly configure the
destination limit to the maximum number (which is platform dependent),
otherwise it would be applied with a very low default value. As of my quick
check with 11.4, it looks like this has changed and you can apply a
per-source limit and not care about per-destination, but I might be missing
something, so you'd better test it.

It may also be a good idea to limit the rate of new sessions per second
with the tcp-syn knob. Be careful: most default screen option values are for
server-side protection and very, very rough, so they are completely
inapplicable in your case and must be adjusted for each particular scenario.

Another thing you might find useful is called aggressive-aging:
http://www.juniper.net/techpubs/en_US/junos11.4/information-products/topic-collections/security/software-all/security/index.html?topic-60842.htm


Re: [j-nsp] How reliable is EX multichassis? 3300 and 8200 switches

2012-10-30 Thread Pavel Lunin
Richard A Steenbergen r...@e-gerbil.net wrote:

IMHO multi-chassis boxes are for
 people who can't figure out routing protocols

When it comes to Ethernet switching, 'routing protocols' means what? :)


Re: [j-nsp] How reliable is EX multichassis? 3300 and 8200 switches

2012-11-01 Thread Pavel Lunin
31.10.2012 10:38, joel jaeggli wrote:
 On 10/30/12 5:49 PM, Pavel Lunin wrote:
 When it comes to ethernet switching, routing protocols means what? :)
 spanning-tree/trill/l2vpn/NVO and so on.
Right, but if we get back to the particular case of a DC/enterprise core
consisting of two EX boxes, this list gets a bit shorter. Moreover, I'd
say this application needs bridging, not lots of pseudowires, so l2vpn
won't do either. Well, not so many options in reality :)

And, as far as I understand TRILL (I know something about Brocade's
implementation and almost zero about other vendors'), the idea there is also
to hide routing and most of the control plane from the user. Which is not
that different from the EX Virtual Chassis approach (which is also IS-IS
inside) from the point highlighted by RAS. So TRILL is no less 'for
people who can't figure out routing protocols'. Yes, the technical details
of how control-plane state is synchronized in NSR/NSB are completely
different from what I know about TRILL, let alone VPLS or,
say, MAC-VPN. But while I also find the approach of using protocols and
standalone control-plane processing more elegant from a theoretical point
of view (especially when geographically distributed), in the few-box
case everything rather depends on a particular vendor's implementation,
the number of bugs and the user's ability to maintain it.


Re: [j-nsp] MX80 no more hash-key option in 12.2?

2012-11-06 Thread Pavel Lunin


Sorry for replying to an old thread, but here are my two cents about LB on Trio.

 Please take into consideration that the engineers that designed TRIO LB
 decided to simplify the LB options traditionally available on other
 chipsets, so you may find missing ones under the enhanced-hash-key. TRIO LB
 algorithm is already pretty much sophisticated by default.

The sad part of the story is that the sophisticated behavior (tuned to
provide more granular LB for carrier applications) cannot be turned off
for some special (well, not that special) cases. What is really needed
for, say, CGN is the ability to ignore everything except the source address
for IP traffic. If you want to balance flows among several NAT
next-hops, you need all flows from a given source IP
(subscriber) to pass through the same NAT device.

In the old config branch there is a command, 'set forwarding-options
hash-key family multiservice payload ip layer-3 source-ip-only', but there
is no analogue on Trio.

Yes, we can use FBF or something like RADIUS-signalled placement of
subscribers into VRFs on the BNG side, but a lot of this complexity
could be avoided just by having an option to hash only source IPs and
ignore the other fields. It would even be possible to use unequal LB
(signalled with BW communities) if we needed to balance flows with ECMP
among stateful devices of different capacity.

Pretty much the same thing happens if you need to scale a DC beyond a single
firewall. If you could have, say, an MX80 doing the LB in front of a bunch
of different firewalls/NAT devices, it would be much more flexible (and
easier) than the traditional FBF approach.

services-loadbalancing family inet layer-3-services source-address
seems to be what is needed but, as I understand, only works for LB among
service interfaces. However, it looks like proof that Trio can do this.


Re: [j-nsp] Weird SRX flow timeout issue

2012-11-12 Thread Pavel Lunin
12.11.2012 15:55, James S. Smith wrote:
 after the first hour (on a brand new session)

 Session ID: 29151, Policy name: vpn-usa2-out-postgres/7, Timeout: 20, Valid
   In: 10.2.2.5/49214 -- 192.168.2.10/5432;tcp, If: vlan.3, Pkts: 3, Bytes: 
 180
   Out: 192.168.2.10/5432 -- 10.2.2.5/49214;tcp, If: ge-0/0/15.0, Pkts: 0, 
 Bytes: 0
 Total sessions: 1

 All subsequent sessions are created with a 20-second timeout.

The session has not really been created yet. What you see here is an
incomplete session, which never received a SYN-ACK reply from the
server. See Pkts: 0, Bytes: 0 for the reverse wing. SRX sets a 20 sec
timeout for this state, and that's OK.

The question is why you don't get replies from the server and what
relation, if any, that has to the SRX reboot. First try to
understand whether packets of subsequent sessions ever reach the
server (if not, do they actually leave the SRX's interface), then where
the server's replies go, etc.

Re: [j-nsp] Weird SRX flow timeout issue

2012-11-13 Thread Pavel Lunin
Julien, what you are talking about is an entirely different story. IIRC, SRX
handles TCP RST differently than ScreenOS when a server closes a session. I
don't remember all the details, but it's something like not passing the TCP
RST back to the user and just closing the session.

On the user experience side it looks like a hung session on at least
GNU/Linux clients instead of the expected connection reset by the remote side.

But this has nothing to do with the original topic, which is, I would say,
something related to broken routing.
On 13.11.2012 10:08, Julien Goodwin jgood...@studio442.com.au
wrote:

 On 12/11/12 16:03, Tim Eberhard wrote:
  Benny,
 
  I've been working with the SRX since before it was in beta loading it
  up on a SSG550-M and netscreen previous to that. TCP keep alives, or
  any tcp packet that traverses that session has ALWAYS reset the
  timeout. The only time where you would see this not working is if
  you had a situation of asymmetric routing or some type of crazy load
  balancing through firewalls.

 All I can say is that as of late 2009 on branch SRX (specifically
 SRX650, using then-current JunOS, probably 9.5) this was not the case
 with SSH traffic (which IIRC doesn't have an ALG).

 It wouldn't kill the session, just wouldn't extend it.



Re: [j-nsp] ipv6 autoconfiguration and ::/0

2012-11-13 Thread Pavel Lunin
How about a generated default route on R2?
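A minimal sketch of what I mean, assuming R2 could be configured after all and that a contributing IPv6 route is learned from R1 over the eBGP session (the policy name is hypothetical):

```
# Hypothetical: generate ::/0 on R2 from any BGP-learned contributing route
set policy-options policy-statement from-r1 term 1 from protocol bgp
set policy-options policy-statement from-r1 term 1 then accept
set routing-options rib inet6.0 generate route ::/0 policy from-r1
```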
On 08.11.2012 12:18, Lukasz Martyniak lmartyn...@man.szczecin.pl
wrote:

 Hi all

 Is it possible to configure router with ::/0 same way as hosts are
 configured with IPv6 stateless autoconfiguration ?

 This is what i would like to achieve:

 R1 --- R2

 I have eBGP between R1 and R2. I can advertise only one ipv6 prefix (but
 not ::/0) to R2 and i can not configure R2. The goal is to force R2 to send
 all traffic by R1 as a default.

 Can router-advertisement in juniper do the trick ?

 Best Lukasz



Re: [j-nsp] Multihoming Using Juniper SRX 240

2012-11-20 Thread Pavel Lunin
- Load balancing all traffic between 2 ISP connections, not sure if its
 possible or not?

 - Send/Receive traffic of some subnets through one ISP and for others
 through other ISP to maximum utilize both ISP links



First, I must say that in 100% of cases (I am assuming it's an enterprise
application because an SRX is being used as the ASBR) these two tasks
aren't really worth it for outgoing traffic. If you wish to balance
outgoing traffic, per-flow ECMP is enough almost everywhere. All those
"this subnet goes here and that subnet goes there" schemes and the
consequent FBF and PBR do nothing but increase unnecessary complexity and
lengthen troubleshooting time.

I bet your incoming traffic is at least 10 times greater than your outgoing
(well, I might be wrong, with 5% probability). What you might really need
is to balance incoming traffic. That means you should play with what you
advertise to influence how the world forwards traffic to you, not what you
forward to the world. Even with a single /24 prefix you still have tools to
do so. Prepends and TE communities (if your ISPs support them) let you
achieve quite a lot.
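As a sketch of the prepending approach (the ASN, prefix, and group name here are hypothetical, not from the original post):

```
# Hypothetical: advertise our /24 toward ISP-B with two prepends,
# so the world prefers the path via ISP-A for inbound traffic
set policy-options policy-statement export-isp-b term ours from route-filter 198.51.100.0/24 exact
set policy-options policy-statement export-isp-b term ours then as-path-prepend "64500 64500"
set policy-options policy-statement export-isp-b term ours then accept
set policy-options policy-statement export-isp-b term other then reject
set protocols bgp group isp-b export export-isp-b
```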

Also, check the JUNOS Enterprise Routing book; it's all there.


Re: [j-nsp] Distributing OSPF load on MX80

2012-11-29 Thread Pavel Lunin
29.11.2012, Benny Amorsen wrote:
 Alternative, is BFD cheap on an MX80? If I turn on BFD, I could set the
 OSPF hello timers longer than the current 10 seconds. Of course that is
 no good if BFD just makes even more work for the already-busy routing
 engine.
AFAIK, at least as of 11.something, BFD was handled by the RE on the MX80,
not by the host CPU as on the big MXes. It looks like that's because the
host CPU on the MX80 is quite a bit slower (the marketing way of reading
this is that it's more power- and heat-efficient, thus more suitable for
mobile applications, which is what the MX80 was primarily designed for :).

The issue has been discussed here about a year ago:
https://puck.nether.net/pipermail/juniper-nsp/2011-March/019245.html

I think it's worth checking whether any hellos are dropped by the control
plane protection policers on the way to the RE.
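As a sketch of the timer trade-off being discussed (interface name and values are hypothetical; whether this actually offloads the RE depends on where BFD is punted, as noted above):

```
# Hypothetical: relax OSPF hellos and let BFD do fast failure detection
set protocols ospf area 0.0.0.0 interface ge-1/0/0.0 hello-interval 30
set protocols ospf area 0.0.0.0 interface ge-1/0/0.0 dead-interval 120
set protocols ospf area 0.0.0.0 interface ge-1/0/0.0 bfd-liveness-detection minimum-interval 300
```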


Re: [j-nsp] JUNIPER AXC1100

2012-11-29 Thread Pavel Lunin
Hi Giuliano,

 Does anyone has some experience using ACX1100 or any other router from ACX
 family ?

 We are looking for an aggregate router for our network and we are thinking
 to use ACX only with gig ports.

 There is some specific questions about this router:

As far as I know, many things are just not ready yet. While the box is
supposed to be a low-scale MPLS router with many of the PE features
needed for the real world, including VPLS, L3VPN, policers and so on, the
current software supports only Martini circuits and a few other things
(checkable against the docs:
http://www.juniper.net/techpubs/en_US/release-independent/junos/topics/reference/general/acx-series-features.html).

This is the reason they are targeting the ACX at the mobile access market,
where PWE3 is more or less enough (base station aggregation through
MPLS, SyncE, etc).

In a year or so it should look like a real router, but for now it's just a
very specialized thing.


Re: [j-nsp] Distributing OSPF load on MX80

2012-11-29 Thread Pavel Lunin
2012/11/29 Saku Ytti s...@ytti.fi

 On (2012-11-29 20:34 +0400), Pavel Lunin wrote:

  AFAIK, at least as of 11.something, BFD was handled by RE on MX80, not
  the host-CPU like it is on the big MXes. Looks like it's because the
  host-CPU on MX80 is quite less quick (marketing way of reading this is

 I suppose host-CPU means PFE/LC CPU? MX80 has pq3 8544, MPC2 has pq3 8548.
 MPC2 needs to talk to two trios, MX80 only one. So it does not feel like
 MX80 PFE CPU is underpowered compared to big brothers.


From the frequency point of view — yes. I'm not sure my knowledge is enough
for a full-scale Steve Jobs style megahertz-myth holy war, but it looks like
the 8548's built-in Ethernet could be the key:
http://www.freescale.com/webapp/sps/site/taxonomy.jsp?code=PCPPCMPC85XX

Well, don't get me wrong, I don't know the right answer; I've only heard
some rumors that the MX80 linecard host subsystem is weaker than that of
the MPCs. That might be totally wrong, though.

IMHO, a good field test of whether it's of the same capability would be to
measure inline Trio IPFIX performance on the MX80. Flow export is done by
this CPU. I've been wondering how powerful the MX80 is at inline J-Flow
since Juniper announced the feature, but I haven't had an opportunity to
test it.

--
Regards,
Pavel

Re: [j-nsp] SRX, UDP traffic, routing asymmetry

2012-12-06 Thread Pavel Lunin

03.12.2012 07:48, Dale Shaw wrote:
 Does the SRX do something special with asymmetric UDP flows? When I
 say UDP I mean UDP generically, because I'm aware of special cases
 like set security flow allow-dns-reply. I have an ever-growing
 suspicion that we are throwing packets on the floor in certain
 circumstances.
SRX always performs a reverse route lookup (to the source IP
address) when processing the first packet of a session and installs
the next-hop into the session table. Subsequent reverse packets fall under
the session context and are forwarded using this next-hop without route
lookups.

But when the reverse lookup is performed, SRX checks that the
outgoing interface is in the same security zone as the interface through
which the first packet arrived. If the zones do not match, traffic is
dropped. So in practice there is no problem with asymmetric flows
through a single device, but you must place both interfaces into a
single zone (a reasonable security constraint, I would say).
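A minimal sketch of that single-zone arrangement (the zone name and tunnel unit numbers are hypothetical):

```
# Both tunnel interfaces in one zone, so the reverse-route zone check passes
set security zones security-zone vpn interfaces st0.0
set security zones security-zone vpn interfaces st0.1
set security zones security-zone vpn host-inbound-traffic protocols ospf
```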

Last time I checked, SRX did not support artificial symmetrization based
on caching the next-hop through which the packet arrived.

I would say the right approach is to readjust the OSPF link costs
assigned to st0.x interfaces to make forward and reverse flows follow
the same tunnel. If, for whatever reason, you really need to forward
traffic so that forward and reverse flows follow different
links/routers, you need to influence the outer header routing, e. g.
playing with the underlying IGP/BGP/TE/ISP manager/etc.


Re: [j-nsp] Chassis cluster and forwarding performance

2012-12-10 Thread Pavel Lunin

 (This is for packet mode J-2350) I've been reading on chassis cluster,
 and noted that data plane is active/active mode,  does that mean
 double PPS in general?
As there is no flow state, there is no need for cluster in packet mode.
And it is not supported.

In general, an active/active data plane for a JSRP cluster means you can
place reth interfaces into different redundancy groups where different
nodes are active. There is just no special 'mode' in JUNOS for
active/passive; it's totally up to you how to group your interfaces
and what happens when one of them fails. You can attract all traffic to one
node or divide it between the nodes.

Routing a packet between interfaces placed into RGs mastered by different
nodes requires h-shaped forwarding (through the fabric link), which is
where the double PPS comes from. But nothing forces you to route like this.

In my opinion, nearly 100% of real-world FW cluster implementations need
a single RG1, to which all interfaces belong. This basically means
active/passive data plane behavior.
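A sketch of that single-RG1, active/passive layout (priorities and interface names are hypothetical):

```
# node0 preferred for both the control plane (RG0) and the data plane (RG1)
set chassis cluster redundancy-group 0 node 0 priority 200
set chassis cluster redundancy-group 0 node 1 priority 100
set chassis cluster redundancy-group 1 node 0 priority 200
set chassis cluster redundancy-group 1 node 1 priority 100
# all reth interfaces in the same RG1, so traffic stays on one node
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth1 redundant-ether-options redundancy-group 1
```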

The only feasible exception might be a kind of multi-tenant scenario where
traffic to/from certain VLANs is always passed through a given node. When
a failure occurs performance degrades, so you get more performance
under normal conditions. But I'd say it's not worth it, because a 2x
hardware overkill is often much cheaper than the advanced NOC skills
required to maintain and troubleshoot the more complex solution.


Re: [j-nsp] EX Switch Question

2013-01-10 Thread Pavel Lunin

Just don't go there. EX is in no way a metro SP switch.

This is a very common case: customers who themselves want a Juniper metro
SP solution have been discussing it with us roughly once a week since
the EX series was launched. After all that, I am 100% sure this is not
what the EX is about.

10.01.2013 18:23, Paul Stewart wrote:
 Thanks but this is pure layer2 deployments (typically).  I should have
 clarified that some of the VLAN's have IP but most are to link buildings
 together within a metro etc

 Really, this is more of a metro ethernet type of environment.

 I'll read up again on this but I'm comparing this against a tried and true
 Cisco solution we've used many times and it just worked (rather simply
 too). with what I've seen/read so far on the Juniper front, this is
 going to be a problem to implement - really hoping someone can prove me
 wrong though ;)

 Paul





Re: [j-nsp] EX Switch Question

2013-01-11 Thread Pavel Lunin

 So is there anything reasonably priced in the Juniper lineup for this kind
 of situation or do we look at Cisco/other?

If a bunch of MX5s doesn't fit the price expectation, then, I would
say, Cisco/other.

Looks like Juniper just didn't care much about metro ethernet. BTW, it's
sometimes said that the reason is the lack of such a market in North
America (where I've never even been, thus I can't judge whether this is
correct :). Another reason (more serious, I would say) is Juniper's
strong position in the MPLS battle, which doesn't allow them
to invest much into plain ethernet metro access technologies, so as not to
kill the goose that lays the golden eggs (from a pure engineering
point of view there is also some logic in such an approach).

With the launch of the ACX series things might change, and a reasonably
cheap JUNOS-based metro ethernet-over-MPLS solution might become available,
but that is a matter of a few quarters ahead, as I understand it.


Re: [j-nsp] VPN from SRX to CIsco with more than subnet locally

2013-01-16 Thread Pavel Lunin

16.01.2013 20:46, Anton Yurchenko wrote:
 Juniper solution is to either set up multiple tunnels, one for each
 proxy-id, or to convert the remote side to route-based VPN.
 On the Cisco side it is implemented via VTI, for IPSec traffic have a
 tunnel interface like GRE tunnel and place traffic onto it via routing
 instead of crypto-maps. Very similar to Juniper.
 http://www.cisco.com/en/US/technologies/tk583/tk372/technologies_white_paper0900aecd8029d629_ps6635_Products_White_Paper.html

 http://x443.wordpress.com/2011/11/03/route-based-vpn-between-juniper-and-cisco/


Though this is pretty obvious and elegant, it's a very common case that
you can't do it for whatever reason. E.g. older IOS could not do VTI
without GRE, while an SRX cluster could not do GRE until very recently;
the remote peer is just too dumb, etc. Sometimes the remote side just won't
switch to route-based VPN because they don't know how, or it's a NOC shift
with strict config guidelines they can't break. A very straightforward
workaround for such cases is to add another tunnel to the same peer for
the second pair of subnets. But that requires another global address on
one side.


Re: [j-nsp] RR cluster

2013-02-06 Thread Pavel Lunin

While the aforementioned approach (unique IDs and vanilla iBGP in
between) seems a reasonable baseline, the best way in practice depends
on factors like what sort of network the RRs serve, how much state
they need to hold, and whether they are on-line (carry transit traffic)
or off-line (just pieces of standalone control plane).

A good discussion of same vs. unique cluster-IDs can be found in the JUNOS
Enterprise Routing book. Some additional ideas on scaling RRs
for large-scale mBGP/VPN installations are in MPLS-Enabled
Applications.

As the cluster-ID debate easily turns into a holy war, forgive me
this bit of fuel on the fire. But having watched several networks
with the same-ID approach in place for years, I must say it's fairly
OK, especially if the RRs are topologically close or you don't expect
the network to scale beyond a single pair of RRs. Less RIB state on the
RRs means not only less burden on the hardware, but also a bit of
troubleshooting simplicity for the NOC guys. In some cases, say, if the RRs
never need to forward traffic to each other, you might even want to omit
the iBGP session between them.

But, again, if you just want to plug and go, the mentioned approach
seems to be the right start.

Regards,
Pavel

06.02.2013 12:17, Huan Pham wrote:
 Agree with Doug, with one condition: the RRs do not share a cluster ID.

 If the two RRs have the same Cluster ID, then one RR does not accept routes 
 advertised by the other RR which it receives from its clients. It however 
 DOES accept routes generated by the other RR itself.

 As a best practice, keep the Cluster ID unique to maximise the redundancy. 
 Although clients may receive redundant routing updates, no routing loop 
 occurs, nor there is a loop of routing updates. 

 Huan


 On 06/02/2013, at 2:02 PM, Doug Hanks dha...@juniper.net wrote:

 vanilla ibgp between the RRs would work


 On 2/5/13 6:36 PM, Ali Sumsam ali+juniper...@eintellego.net wrote:

 Hi All,
 I want to configure two RRs in my network.
 What should be the relation between two of them?
 I want them to send updates to each other and to the RR-Clients.

 Regards,
 *Ali Sumsam CCIE*
 *Network Engineer - Level 3*
 eintellego Pty Ltd
 a...@eintellego.net ; www.eintellego.net

 Phone: 1300 753 383 ; Fax: (+612) 8572 9954

 Cell +61 (0)410 603 531

 facebook.com/eintellego
 PO Box 7726, Baulkham Hills, NSW 1755 Australia

 The Experts Who The Experts Call
 Juniper - Cisco ­ Brocade - IBM

Re: [j-nsp] RR cluster

2013-02-07 Thread Pavel Lunin

 I would strongly advise against the previous suggestion of not running
 iBGP between the routers.  While the topology in particular may
 function without it, the next person to come along and work on the
 network may not expect it to be configured
Hmm... it really depends. I can easily recall scenarios where an iBGP
session between RRs would increase the chances of a service outage.

Say the RRs are offline and don't pass traffic at all. They just don't (or
rather, must not) have any routes of their own that attract traffic.
The lack of inter-RR exchange is just an additional security (and
simplicity) measure. No session means no routing policy and no leakage of
anything (e.g. due to software bugs).

Imagine you use the RRs to inject initially static routes into the network
(flowspec, blackhole, whatever). There is no reason to share these
between the RRs; instead, you want both reflectors to redistribute them
independently. If you forget to configure the statics on both RRs
(a very common mistake), then, with an iBGP session between them, you will
still see two copies of the route in the network even though you configured
it only once. Thus you won't suspect that an RR failure will lead to a
withdrawal of these routes. So I personally would recommend filtering
such updates between RRs even when you have reasons for a session
between them.
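A sketch of such a filter (the policy and group names are hypothetical):

```
# Hypothetical: keep locally injected statics off the inter-RR session,
# so each RR must originate them independently
set policy-options policy-statement no-own-statics term statics from protocol static
set policy-options policy-statement no-own-statics term statics then reject
set protocols bgp group peer-rr export no-own-statics
```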
 inconsistent with standards and best practices and may, through a
 seemingly unrelated change in topology, break things horribly. 
BTW, I don't really believe there is a common standard or
best practice prescribing this.


Re: [j-nsp] MX80 port numbering

2013-03-16 Thread Pavel Lunin
Lol.  Figuring out port assignments shouldn't be like re-living Advanced
 Calculus in high school.

 Thanks to the poster who provided the KB article, that was really helpful.


Things are even worse on the MX80-48T chassis, with its duodecimal notation
(JNCIx test authors must appreciate it) and totally crazy markings on the
front of the device. I can never find the right port on the first attempt.

To be honest, vendors could improve quite a lot in this area. Proper design
would save us a good deal of headache, from sane marking of
ports/port-groups on chassis/cards to ID LEDs for ports. For example, I
would prefer the totally useless traffic LEDs to indicate something
sensible instead of just blinking forever. Or, for example, some switches
have LED modes, selected by a front button, for things like highlighting
half/full-duplex ports (still, in the 21st century). Why not convert this
ritualistic stuff into something more useful, like, say, turning a
particular port LED blue when the user types request interface highlight
ge-blah/blah? But for some reason vendors don't care, and even
switches/routers with ID shields are rare, despite modern DCs and POPs
becoming more and more dense.

Same story with craft interfaces and front panels. Most of them are about
as useful as a punched card reader would be in their place.


Re: [j-nsp] LLDP on LAG behaviour

2013-03-28 Thread Pavel Lunin


27.03.2013 11:24, Riccardo S wrote:
 Indeed there is a media converter (fiber to copper), but do you think that 
 could be the problem ?
Most copper/SFP media converters are essentially just two-port
switches with a sort of hard-coded dumb pass-through config inside. So
they can easily consume some special PDUs, fail to forward jumbo frames,
etc.


Re: [j-nsp] EX Switch Question

2013-04-01 Thread Pavel Lunin


31.03.2013 18:18, Mark Tinka wrote:
 On Friday, January 11, 2013 01:41:47 PM Pavel Lunin wrote:

 Looks like Juniper just did not much care metro ethernet.
 BTW, it's sometimes said, that the reason is a lack of
 such a market in North America (where I've even never
 been to, thus can't judge whether this sentence is
 correct :). Another reason (more serious, I would say)
 is a strong Juniper's position in the MPLS battle, which
 doesn't allow them to invest much into the plain
 ethernet metro access technologies to not kill the goose
 that lays the golden eggs (from the pure engineering
 point of view there is also some logics in such an
 approach).
 Well, Cisco must be seeing some market that Juniper aren't, 
 considering these are both American companies.

It was just the story as I heard it. I don't really know how accurate this
statement is (and to me it also seems a bit strange). Generally speaking,
being an American company does not mean making most of your money in North
America, though I have no idea how the NA/other ratios compare for Cisco
and Juniper these days.

Put more precisely, the idea is that Juniper doesn't believe it can earn
money in the ME market, for whatever reason. At least to me, not
playing the MetroE game seems to be a conscious decision, not an
oversight. Maybe it's just the same story as VoIP, where Cisco successfully
played for ages and Juniper gave up completely after a couple of silly
attempts (my special thanks to Juniper for deliverance from the SRX
converged services nightmare).

 When the MX80 was still codenamed Taz, and all we got to 
 test was a board with no chassis, I asked Juniper to rethink 
 their idea about a 1U solution for access in the Metro, 
 especially because Brocade had already pushed out the 
 CER/CES2000,
Well, as someone involved in selling both Juniper and Brocade, from this
perspective I can say Juniper's decision was just right :) I love the
CES/CER, but when it comes to real competition with the MX5-80, there are
really few chances for Brocade. A small MX usually costs about the same in
a comparable configuration. A bit more expensive, but Brocade is by far
#3 in terms of MPLS features and the rest. Some exceptional niches exist,
and Brocade is going to launch a 4x10GE-port CER, but generally it's
really hard to position the CER against the MX/ASR and the CES against the
Catalyst ME or, say, the Extreme X460.
 and Cisco were in the final stages of commissioning the ME3600X/3800X, at the 
 time.
Well, I'd also really like to have a Juniper box competing against the
Catalyst ME but, again, I believe there might be (I don't say there
is) some common sense in not even trying to play this game. I can
easily imagine sane reasons for deciding to spend money and
time on the ACX (PTX, QFabric, WiFi) instead of just trying to catch up
with Cisco, Extreme and the others in the straightforward ME game.

 Epic fail on Juniper's part to think that networks will 
 still go for too big boxes for small box deployments. 
 The ERBU head promised that they were looking at a 1U MX80 
 box that would rival the Cisco and Brocade options in the 
 access, but I think they thought coming up with the MX5, 
 MX10 and MX40 were far better ideas :-\.
It's really interesting that you say 2 RU is too much for the MX80. I never
heard such a claim from a customer; at least, it's never been a serious
problem. Much more often they ask whether it fits into a 60 cm deep rack
and how much power it eats. I think the lack of this requirement comes
from the completely different economics of real estate, access network
structure and so on. In many countries (where telecom markets are
emerging) the access network is often an interconnection of places like
this: http://nag.ru/upload/images/20519/1877875733.jpeg There is no way I'd
put something priced close to an MX5 there, be it 1 or 2 RU high :) So
people often end up using something extremely cheap in the PoPs (don't
even think MPLS) and aggregate it at a location where 1 or 2 RU
doesn't make much difference.




Re: [j-nsp] EX Switch Question

2013-04-02 Thread Pavel Lunin

 I couldn't agree more. Funnily enough when I saw the EX2200C-12 get
 released being both fanless and shallow depth the first use case I
 thought was ME NTU/Small PoP. Front-mounted power would have been
 nice, but hey, I'll deal. There are enough dot1q-tunnelling knobs
 built-in* for most applications, and aggregating this back to an MX
 somewhere would make this a pretty solid design. Port ERPS down to the
 22xx code (once Juniper get it sub 50ms) and this would kick ass. I've
 seen a couple of MX80s sitting in basements where there is little or
 no air-con, and I can't imagine them being long for this world.
 *Having to pay more than the price of the switch to activate EFL to
 use Q-in-Q does dampen the idea somewhat though.
Well, don't get me wrong. MPLS in the access is a lovely thing and I
really understand what Mark wants from Juniper. Moreover, I personally hate
all the plain-ethernet garbage in the access (people tend to insert more
tiers on the way from the access switch to the MX-like router because they
can't connect every basement to an aggregation point, and that just breaks
the whole idea of the access network).

I just wanted to note that, from the money point of view, at the time
the MX80 was under construction it /could/ have been a wise decision not to
compete against products like the CES/CER and to make it more router-like
than a MetroE-optimized device.


Re: [j-nsp] Clustering J-series across a switch

2013-04-02 Thread Pavel Lunin


02.04.2013 20:47, Mike Williams wrote:
 I don't have the page to hand but there is an option to configure the control 
 link in the old way, using (a?) VLAN (4094 IIRC), otherwise new clusters will 
 use a special ether-type.
Yes, as already mentioned, it's revertible with set chassis cluster
fabric-link-vlan enable/disable (hidden). show chassis cluster
information will show you the actual state.

Also see KB23995.

 Has anyone ever even tried to put a switch between a J-series, or SRX-series, 
 cluster?
Yes, I have, but quite a long time ago. It worked for me in a lab with an
SRX650 running very early code (9.something), but I've never seen it used
in production. AFAIR, some switches (Cisco?) needed to be configured
somehow not to check that the payload was IP (I'm not sure that check
really was enabled by default).


Re: [j-nsp] Best route reflector platform

2013-04-21 Thread Pavel Lunin
2013/4/21 JP Velders j...@veldersjes.net


 There's also SRX-BGP-ADV-LTU for Advanced BGP License for SRX
 650 only (Route Reflector), though I wonder how far it'll scale...


In SP terms — almost not at all.

1. The branch SRX control plane runs on a single core of a Cavium Octeon,
which is not fast (700 MHz on the SRX650).

2. Branch SRX do not support more than 2G of RAM. Moreover, about 700M+ is
preallocated for the flow session table, and it is not released even when
you switch the box into packet mode (at least, that was still the case last
time I checked, a year or so ago). Plus JUNOS itself. In practice you have
only about ~700M of free control plane memory on an SRX650.

3. Check out the three-year-old The hunt for a better route reflector
thread on this list in the archives.

Re: [j-nsp] Problems with Link Aggregation

2013-04-22 Thread Pavel Lunin

22.04.2013 12:16, Ivan Ivanov wrote:
 http://kb.juniper.net/InfoCenter/index?page=content&id=KB10926&actp=search&viewlocale=en_US&searchid=136661818

 Here is written that it uses layer 2,3 and 4 for load balancing hash
 algorithm. And yes, the forwarding-options hash is not configurable on
 EX-series.
http://kb.juniper.net/InfoCenter/index?page=content&id=KB22943

For L3 traffic (bridged and routed), EX switches construct the hash value
from several (6, IIRC) least significant bits of the L3 addresses and L4
ports. MAC addresses (and only MAC addresses) are used for non-IP bridged
traffic. So, say, all MPLS streams between a given pair of routers will
follow a single link regardless of labels and payload (with the exception
of the EX8200, which can hash on labels).




Re: [j-nsp] SRX3600 weirdness

2013-04-23 Thread Pavel Lunin
2013/4/24 James S. Smith jsm...@windmobile.ca

I found that a bit strange myself, but we log all traffic flows through the
 firewall and the only communication going on was on port 993.


The traffic log is a bad clue for this sort of issue, really. You'd need to
use flow traceoptions to check 1) whether any packet reaches the SRX
at all and 2) if yes, how it is processed within the flow subsystem. Just
start with a filter that narrows the traffic down to the two IP addresses
of the DB and Exchange servers (no ports).
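A sketch of such a traceoptions setup (the file name and addresses are hypothetical placeholders for the two servers):

```
# Hypothetical: trace only flows between the DB and Exchange servers
set security flow traceoptions file flow-trace size 1m files 3
set security flow traceoptions flag basic-datapath
set security flow traceoptions packet-filter f0 source-prefix 192.0.2.10/32
set security flow traceoptions packet-filter f0 destination-prefix 192.0.2.20/32
```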


Re: [j-nsp] Best route reflector platform

2013-04-24 Thread Pavel Lunin
2013/4/24 Richard A Steenbergen r...@e-gerbil.net wrote:


 it either won't work at all, or won't survive for very
 long. And that's after taking a lot of steps to reduce core IBGP mesh
 route load. I haven't touched any of the virtual SRX stuff, does it
 run 64-bit JUNOS?


I haven't either (it's just rumors :) but I don't believe it will be any
better than normal JUNOS from this perspective.

There is another caveat. The peculiarity of boxes like SRX or J-series
(be it virtual or not) is that their PFE (the fwdd process) also holds
the FIB (not only the flow session table), which is stored in the same
RAM. You might be able to reduce it with a RIB-to-FIB export policy,
though I don't know whether it'll save any memory. For Internet routes
the FIB might not be that huge a thing, especially in comparison with
the flow table, but in a VPN environment it might consume quite a
noticeable amount of RAM. One might prefer to just shut down fwdd
entirely on such an RR, but I don't believe that will work (at least
not stably).

In fairness I really don't think there is a big market for dedicated
 RR's, so I'm sure it isn't on the top of anyone's radar. That said, it
 is an absurdly easy problem to solve, with almost no work required (ship
 JUNOS on 1U PC and call it JCS100). So besides greatly pissing off many
 of its largest customers, they're leaving money on the floor and forcing
 people into other solutions with other vendors.


Agree.


 I really can't imagine that the benefit of selling an extra MX240 chassis,
 even if sold at
 regular price, is worth the money being lost from everyone else


BTW, does an empty chassis with just an SCB and RE work here? BGP will
definitely run through fxp0, but I haven't ever tried to run an MX with
no line cards. It seems this should work, but does it? Has anyone tried
it in production?


Re: [j-nsp] SNMP on logical-system fxp0

2013-04-24 Thread Pavel Lunin

20.04.2013 01:45, Chip Marshall wrote:
 So, I have an MX5 with its fxp0 management interface connected to
 one network, which I've placed in a logical-system so it can have
 its own default route for out-of-band management.

This is what I never understood. Why do people want to use fxp0 (or any
other dedicated management iface) for real production management?
Of course we need some sort of special management VLAN or routed
infrastructure to separate management from the payload-carrying network,
but what is the reason to bypass the data plane and plug it right into
the RE? Even leaving aside all the troubles discussed in this thread,
implied by the impossibility to use a lot of forwarding features (you
can't even connect it to two switches for backup), this deprives you of
the ability to protect the RE with hardware filters and policers. Say, I
saw a couple of quite serious cases where a crazy trusted NMS DoSed
routers with lots of ICMP probes and SNMP.
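
As a rough illustration of what you give up, RE protection on a revenue
port is usually a policed lo0 filter evaluated in hardware. The names,
prefix, and rates below are made-up placeholders, and a real filter
needs further terms for routing protocols and the like; this is only a
sketch of the idea:

```
set firewall policer NMS-1M if-exceeding bandwidth-limit 1m burst-size-limit 63k
set firewall policer NMS-1M then discard
set firewall family inet filter PROTECT-RE term nms from source-address 192.0.2.0/24
set firewall family inet filter PROTECT-RE term nms from protocol udp
set firewall family inet filter PROTECT-RE term nms from port snmp
set firewall family inet filter PROTECT-RE term nms then policer NMS-1M
set firewall family inet filter PROTECT-RE term nms then accept
set interfaces lo0 unit 0 family inet filter input PROTECT-RE
```

With management coming in over a revenue port, even a misbehaving
trusted NMS gets rate-limited before it can hurt the RE.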

In my opinion fxp0 is a thing much like the console port: useful when
you intentionally need to access the control plane directly (and this is
why you'd better think in advance about where it's plugged in and which
subnet it belongs to), but as a full-time management interface it seems
to bring more trouble than benefit.


Re: [j-nsp] SNMP on logical-system fxp0

2013-04-25 Thread Pavel Lunin
2013/4/25 Brandon Ross br...@pobox.com

Many operators have backbone routers with 10's of 10GbE ports and maybe
 even a few 40 or 100GbE ports at this point, perhaps a variety of SONET
 still, etc.

 Are you suggesting that they should purchase a 10/100/1000 copper card
 just for management?  Or are you suggesting that they should buy 10GbE
 switches for their out of band management network?


No, I propose to not even bother with copper, if you prefer. Just use a
VLAN or any other type of virtual circuit inside those TerabitEthernet /
SONET / FrameRelay / whatever.

And do you propose to use dedicated fibers just for management? Or,
otherwise, how would you pass traffic to/from the copper switch into
which you plug the fxp0 of a remote router? I bet in 99% of cases,
connectivity from the NOC to the OOB switch shares fate with the
connectivity to the router whose fxp0 is plugged into this switch. Not
to mention that this OOB switch is an additional point of failure with
little chance of being backed up.

So there is nothing in fxp0's OOB-ness that is worth its clumsiness.


Re: [j-nsp] SNMP on logical-system fxp0

2013-04-25 Thread Pavel Lunin


Well, I agree: if you are brave enough to run a real OOB management
network, you have reasons to use fxp0 on the devices that don't have 1G
ports, though I believe it's at least not cheaper than buying 1GE ports
just for management :) But in my experience a real OOB mgt network is
too rare a case in the real life of the ISP world. BTW, yes, there is
much more sense in real OOB management in the access, but you first gave
an example of an all-10/100GE core, which is a slightly different thing.
And even in the access nothing really pushes you to use fxp0 for OOB
mgt.

However, it doesn't really matter whether and when it's common or not.

If you know what you are doing and why, there is no problem. But most
people who I talk with about using fxp0 want to use it just because,
with no good reason except it is specially developed by vendors, so I
think it's better to manage devices through it, and they just don't
really understand the implications of bypassing the data plane.

BTW, I don't say it's useless. When you need to remotely upload software
to a non-operating box, it is the only option. But I'm sure it's better
not to use it for every-day routine management like SNMP, if you have
the option.


25.04.2013 19:07, Brandon Ross wrote:
 On Thu, 25 Apr 2013, Pavel Lunin wrote:

 No, I propose to not even bother with copper, if you prefer. Just use a
 VLAN or any other type of virtual circuit inside those TerabitEthernet /
 SONET / FrameRelay / whatever.

 So you propose to do away with the out of band network entirely, and
 instead do all of your management inband.  While that is a certainly a
 fine choice for some networks, others do not want to fate share their
 management capabilities with their production traffic.  Is that
 unreasonable?

 And do you propose to use dedicated fibers just for management?

 I don't just propose it, I do it.  Depends on the network of course,
 but if you'd like an example, the network we put together for Interop
 every year has what we call Access Ether.  It is a true out of band
 management network.  It's flat layer 2 (to keep it simple since it
 doesn't have to scale greater than a 100 devices or so), runs on
 completely independent switches from production traffic and runs on
 separate backhaul fiber (or copper in some cases).

 There are operators that run their networks in a similar way.

 Or, otherwise, how would you pass traffic to/from the copper switch,
 into which you plug the fxp0 of a remote router? I bet in 99% of
 cases, connectivity from NOC to the OOB switch shares fate with the
 connectivity to the router, whose fxp0 is plugged into this switch.

 At some level there's fate sharing, sure.  If Darth Vader uses the
 Death Star against Earth, then I'll lose both my inband and out of
 band management capabilities.  If, however, I have a failure of the
 data plane somewhere in my production network, at least I can still
 reach the processor and send it new code or whatever I need to do.

 Not even mentioning that this OOB switch is an additional point of
 failure with little chances to be backed up.

 That's only the case if you completely disable inband management
 capabilities.  Some networks choose to do this, some do not.  For
 those that do not (again see the Interop network) it is not an
 additional failure point at all, but actually redundancy.

 So there is nothing in the fxp0's OOBness, what is worth its
 clumsiness.

 Please don't get me wrong.  I think the way fxp0 is implemented sucks
 big time.  The fact that I have to use the default logical-system to
 make it useful and move all of my production traffic to others is
 super annoying. The lack of other features on that port makes it
 downright dangerous if you don't pay attention, but it is NOT useless.




Re: [j-nsp] SNMP on logical-system fxp0

2013-04-25 Thread Pavel Lunin

25.04.2013 19:04, Alex Arseniev wrote:
 Netflow does NOT require encryption as standard (SNMPv3 does). 
Netflow or stateful log export is very often not supported on fxp0 and
its analogues. Even if it is, a high rate of those logs can easily
overwhelm the RE or the link between the RE and the data plane.
 (a) lo0.0 filter copy is applied to fxp0 as well
It's not done in hardware. So, say, the new multistage DDoS-protection
feature of MX won't work. BTW, do policers work at all on fxp0? I think
they should, but it's a good example of something needing special care,
time, etc. Moreover, it can easily be poorly documented or not
documented at all.
 The providers I worked with build their OOB networks using same design
 principles as their production networks - never flat L2, routed hops,
 every site has at least 1 (often 2 or multi-staged) firewall(s)
 protecting the rest of the OOB domain from rogue elements.
Even so: why fxp0? Why not a normal interface (given you have one)?

Well, in the end it's not that important (though evident) why OOB mgt
interfaces have their limitations; they just do. And while there are
very few benefits (except in some corner cases), there are lots of
drawbacks, which, of course, can be worked around, but what for?




Re: [j-nsp] SNMP on logical-system fxp0

2013-04-25 Thread Pavel Lunin
2013/4/25 Brandon Ross br...@pobox.com

 But in my experience real OOB mgt is a too rare case in real life of the
 ISP world.


 We have very different experiences then.  I'm not claiming it's a
 majority, but I will claim that many of the largest networks in the world
 do, indeed, have true OOB management networks.



Just let me clarify that by real OOB mgt network I mean dedicated
media, switches and all; or, more formally, independence of the control
traffic from the forwarding plane of what it controls.


Re: [j-nsp] SNMP on logical-system fxp0

2013-04-25 Thread Pavel Lunin
2013/4/25 Alex Arseniev alex.arsen...@gmail.com


 Correct. Do you expect someone to attack fxp0 from within Your OOB network?
 Rogue NMS server perhaps?


In a big enough network, anything. A broken NMS (it turns out to happen
more often than you'd think), malware on a management PC, a
misconfigured something (the IP address of a syslog server), an
intentional hack, etc. Also, routing does not mean that you don't have
broadcast domains, and BUM storms are still possible.

In that case You have OOB network design problems, see my point below wrt
 OOB design principles.


Even if you have a firewall behind each fxp0 (and how do you manage
that hell of firewalls? Another OOB mgt network for the OOB management
devices? And we are still counting the price of a single port? :) I bet
you don't rate-limit SNMP or even ICMP on those firewalls.

Let's be honest: any big ISP has more than one mgt network, and they
rarely resemble Eden. Just because ISPs merge and split, different BUs
manage different parts of the network, sometimes BUs merge too, clever
folks leave the company and stupid ones sometimes come, etc.

This is why I'd prefer to have more tools to be sure.


 It is clearly evident that for every vendor product which has management
 built-in interfaces on control modules, these built-in interfaces on
 control modules cannot deliver same features  perf as revenue interfaces.
 Do You have expectations and/or experience/examples to the contrary?



Of course I don't :) This is the thesis I started with, so why should I
expect the contrary?

It becomes even worse when it comes to multi-vendor networks. Different
equipment has different limitations on those ports. And all this makes
the mgt network less and less flexible.

I was once involved in a project to deploy a centralized monitoring
system for a big ISP. They had about 7 different routed OOB mgt
networks (Core, Access, ATM, SDH, etc); I can't even say it was wrongly
done. But just the need to provide connectivity to everything ended up
with GRE over NAT over GRE over NAT, salted with NAT and served through
GRE, sorts of solutions (not everywhere, but in parts). I won't say all
or even most, but A LOT of the troubles they had came from the
limitations of dedicated mgt interfaces.

Of course, as a result, this whole interconnected network was not a
thing with which you could be sure that nothing bad ever happens. Even
though some parts were originally well designed and run.


Even so. Why fxp0? Why not normal interface (given you have it)?


Because fxp0 is free in a sense that it is included in RE price?


OK, I got it. The main reason is the physical port itself (I should've
asked what happened to VLANs on normal interfaces, but I won't :).

Well, my point here is just that one must be very conscious and ask
himself twice whether he knows what he is doing when choosing a
dedicated mgt interface for every-day management.

P.S. I also have doubts that the economic assumption really holds (opex
for administration also costs money), but let's leave that alone.

Re: [j-nsp] SNMP on logical-system fxp0

2013-04-26 Thread Pavel Lunin

   [AA] if You actually still dealing with such issues in Your customer
 networks, my condolences. Especially sad is the issue with management PC
 - do Your customers use commodity Windows PC with freeware Solarwinds
 version as NMS?


Yes, my customers (and companies in general) are not always ideal, and
I don't want to rely on the contrary assumption :) And it has nothing to
do with freeware-ness or Windows-ness. One particular case of a DoS from
an NMS was Micromuse running on Solaris on SUN hardware, configured by
truly educated and certified staff. And of course people need PCs to
manage networks, and they don't listen when I say don't use Windows
and sometimes even don't forget to purchase a new license for your
anti-virus.



  This is why I'd prefer to have more tools to be sure.

 [AA] Not sure if I follow, if BUs are administratively separate, how
 having a true OOB interface (i.e. CMP in Routing Engine) would make Your
 life easier?


No, I didn't mean a true OOB interface. I meant a real HW
[sub]interface. (The prefix sub is what allows you to not buy 1G cards
specially for management :) I can protect it with true HW policers and
ACLs, apply true CoS if I need, put it into a VR, use FBF, VLAN rewrite,
bundle two ports in a LAG, be sure I'll find in the docs whether
something is supported, etc. I even want the management network to be
connected to the Internet (OMG), through a firewall, of course.

A CMP on the routing engine would help to do a lot of other things, but
it's not for everyday routine management either. Just like nobody uses
iLO/iRMC/etc. on servers to manage them.

So, yes, I am not talking about ideal management networks built from
scratch and run by ideal people in a spherical vacuum. If the national
ISP you've mentioned is an example of such a paradise (though we both
know it's not :), I can only envy them.

OK, I think it's enough for this topic. Sorry for kindling the flame.


Re: [j-nsp] SRX1400 opinions

2013-04-28 Thread Pavel Lunin
Hi James,

So basically SRX1400 will do fine as BGP router + firewall?


Yes, it will, though using a stateful firewall as an ASBR has
implications: traffic must flow symmetrically, meaning the forward and
reverse flows of a given session must always traverse the same ASBR. In
practice, this means that either you have a single stateful ASBR
(clustered for redundancy) or you'd better build the external routing
domain with dedicated routers. Rule of thumb: if your AS has a single
site with all external links terminated there, it's OK to use a
firewall; if you have 2+ sites with external links here and there, you
need routers.

A thing to consider about the SRX1400 is its price/performance in
comparison with the SRX650. If you look at the performance numbers,
you'll see they differ much less than the prices do :) In terms of bps
and concurrent sessions they are of about the same capability. In terms
of pps and cps the SRX1400 is (IIRC) about 1.5 times more powerful. So
with a limited budget, I would recommend considering two SRX650s in a
cluster (if you wish, even active/active, though I think it's of no use
in most cases) instead of a single SRX1400. In this case you will also
need additional interface cards (not that expensive), as clustering
consumes three ports on each node.

On the other hand, the SRX1400 is a hardware box with dedicated hardware
for control and data plane; some screen options (by no means all) are
done in the packet ASIC, etc. So for a DC environment the SRX1400 can be
a better choice, especially if you are going to carry more full BGP
feeds in the future and/or serve cps-intensive or short-packet
applications. So if two boxes fit your budget, this might be the better
way.

Re: [j-nsp] SRX 3600 dropped packets - how to debug?

2013-05-27 Thread Pavel Lunin


22.05.2013 21:01, Phil Mayers wrote:
 How can I determine what the dropped packets are, and why they're
 being dropped? 

Run show interfaces extensive and check out the Flow error statistics
(Packets dropped due to) section.
Another place to look at: show security screen statistics per zone/interface.


Re: [j-nsp] SRX 3600 dropped packets - how to debug?

2013-05-27 Thread Pavel Lunin
27.05.2013 13:44, Phil Mayers wrote:

 show interfaces extensive and check out Flow error statistics
 (Packets dropped due to):

 Nothing in there corresponding to the numbers/rates I'm seeing on the
 show security flow statistics
If users are complaining, try to understand what exactly they have
problems with. Figure out a particular pattern for the possibly dropped
packets (like source/destination addresses, etc.) and catch it with
security flow traceoptions + packet-filter. Most probably you'll see the
reason for the drop there.



Re: [j-nsp] SRX 3600 dropped packets - how to debug?

2013-05-27 Thread Pavel Lunin


24.05.2013 19:05, Alex Arseniev wrote:
 If You run any kind peer-to-peer apps (uTorrent, eMule, etc, also
 includes Skype) then You'll see that outside peers trying to establish
 LOADS of unsolicited connection to Your inside hosts.
 And all of them will be dropped unless You enable full cone NAT. 

A bit off-topic, but it seems worth noting here, as I've seen it
several times.

Often people don't have a route for their source NAT pools (especially
in the case of static routing). This leads to the following: when a
disallowed connection from outside comes in, it matches the default
route, then a policy lookup occurs, and if an untrust-to-untrust policy
permits it (for some reason; say, folks managing NAT for broadband
access tend to not bother with policies and just permit everything
everywhere), you get 1) a routing loop and 2) a session table flooded
with this trash. Even if there is no permitting untrust-to-untrust
policy, this still burns extra performance on the policy lookup. So the
best thing is to nail the pool down with a discard route in order to
drop the unwanted incoming connections as early as possible.
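
A one-line sketch, assuming a hypothetical source NAT pool of
203.0.113.0/28: the discard route makes the box drop anything arriving
for the pool addresses that didn't match an existing session or NAT
rule, instead of bouncing it back out via the default route:

```
set routing-options static route 203.0.113.0/28 discard
```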


Re: [j-nsp] SRX Screen not working

2013-05-30 Thread Pavel Lunin

30.05.2013 04:41, Luca Salvatore wrote:
 However, we recently had an attack on one of our customers where there was 
 around 400,000 sessions to a single IP address, as shown:

 show security flow session summary destination-prefix 202.x.x.x
 node1:
 --

 Valid sessions: 5
 Pending sessions: 3
 Invalidated sessions: 384356
 Sessions in other states: 0
 Total sessions: 384364

 Any idea why the screen wasn't blocking this?

Most likely the invalidated sessions just don't count. To clean up the
session table you'd better use the aggressive-aging feature. Session
limits per source/destination are useful when you need to limit users to
a given number of sessions (say, a single torrent client can open tens
of thousands of connections if not limited), but they are pretty useless
for DDoS protection (as most of the screen options are, though; see
below).
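
A hedged sketch of the aggressive-aging knob (the thresholds here are
illustrative, not recommendations): once session-table utilization
passes the high-watermark percentage, idle timeouts are shortened by
the early-ageout value until utilization falls below the low-watermark:

```
set security flow aging high-watermark 90
set security flow aging low-watermark 60
set security flow aging early-ageout 10
```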

Also check out the icmp/udp/syn-flood screen options. On high-end SRX
the flood-protection options are performed in hardware (NPC), in
contrast to the session limits, which are done by the SPC, as the NPC
has no idea what a session is. It may require some basic magic to set
the parameters, as the flood thresholds are measured in pps/cps, in
contrast to the number of concurrent sessions. But if you know the mean
session length, which is quite a stable value for a given service and
easy to figure out from session logs, it's trivial to calculate the CPS
rate by dividing the number of concurrent sessions by it. Take care:
the syn-flood parameters are applied as a bunch, so if you leave some
of them unconfigured, you'll get the default values applied, and most
of them are inappropriate for anything in the real world.
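
For instance (all numbers hypothetical): with roughly 200,000 concurrent
sessions and a mean session length of about 60 s, the baseline is around
3,300 new sessions per second, so thresholds would sit a bit above that
and every syn-flood parameter gets set explicitly:

```
set security screen ids-option protect-servers tcp syn-flood alarm-threshold 4000
set security screen ids-option protect-servers tcp syn-flood attack-threshold 5000
set security screen ids-option protect-servers tcp syn-flood source-threshold 1000
set security screen ids-option protect-servers tcp syn-flood destination-threshold 4000
set security screen ids-option protect-servers tcp syn-flood timeout 20
set security zones security-zone untrust screen protect-servers
```

The ids-option name and the zone are placeholders; tune the numbers from
your own session logs, not from this sketch.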

This said, the screen options don't help much to protect real services
against a real DDoS, as they only narrow the hole and move the jam a
little further from the server, but are unable to distinguish legitimate
packets from the attack. However, they are still useful for protecting
the firewall itself (just slightly more advanced loopback policers).

--
Pavel


Re: [j-nsp] SRX Screen not working

2013-05-31 Thread Pavel Lunin

31.05.2013 01:50, Luca Salvatore wrote:
 Thanks for the info. The attack we recently saw was using IP protocol 3 (GGP) 
 which is not specifically permitted so I'm unsure how it was allowed to 
 create a session in the first place.
The right way is to catch a sample attack packet with flow traceoptions
and check why the SRX even bothers to create a session. If you have no
policy permitting it, it should not (accurate to within a no bugs
assumption). But who knows? The only way to know for sure is to trace it.
 Does the session limit screening only apply to TCP/UDP?
Not sure, but I believe not. It should count all active sessions for a
given IP.
 Also what is the definition of an invalidated session?
1. The most common case is a session created for one wing: when, say, a
TCP SYN has passed from client to server, but no response from the
server has been received yet. Most probably this is your case. Such
sessions have, IIRC, a 10-second timeout.
2. When a TCP RST is sent by one of the parties and the
rst-invalidate-session option is on.
3. I've heard that when a normal closing condition arises, a session is
also first marked invalidated for a very short period before it's
actually torn down. But I don't know many details about that (can
someone on the list elaborate?). However, this should not lead to such
a high value of the invalidated-sessions counter.

As I understand it, this is a sort of queue where candidates for
tear-down are placed. There is a subroutine (like a scheduler) that
scans through the invalidated sessions and decides whether it's time to
pull a particular one down and return its resources to the pool, or to
clear the invalidated state and thus make the session active again (or
for the first time).

--
Pavel


Re: [j-nsp] routing instances - ospf - summarization

2013-06-05 Thread Pavel Lunin
05.06.2013 09:57, n f wrote:
 I can export between routing instances, but I'd like to know if it was 
 possible to export via ospf from RI2 to other physical routers
 a single route that summarizes all the local routes I have. (like 
 10.10.0.0/16, as all the routes I receive via ospf on RI2 from RI1 

This is rather about protocol-independent policies, not OSPF.

You can use a static route like this: set routing-instances RI2
routing-options static route 10.10/16 next-table RI1.inet.0.

But in one direction only: you can't add a static route in the backward
direction, as JUNOS will refuse to commit, claiming a possible RI1-RI2
loop.

Depending on your needs, you can have a static default from the internal
RI to the external RI (or the master instance) and export local/OSPF
routes from the internal RIs to the external/master RI for the backward
connectivity.
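
For the summarization itself, one hedged sketch (the instance names and
prefix follow the example above) is an aggregate route in RI2 exported
into OSPF, so the neighbors see only the /16 instead of the specifics:

```
set routing-instances RI2 routing-options aggregate route 10.10.0.0/16
set policy-options policy-statement EXPORT-SUMMARY term summary from protocol aggregate
set policy-options policy-statement EXPORT-SUMMARY term summary from route-filter 10.10.0.0/16 exact
set policy-options policy-statement EXPORT-SUMMARY term summary then accept
set routing-instances RI2 protocols ospf export EXPORT-SUMMARY
```

The aggregate stays active as long as at least one contributing route
exists, and the export policy makes sure only the summary (not the
contributors) is advertised as an external route.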

--
Pavel


Re: [j-nsp] I've got some bone head problem on an srx...but I don't see it.

2013-06-13 Thread Pavel Lunin


12.06.2013 08:59, Morgan McLean wrote:
 I rolled back and ran a ping to a host out on the net. Heres the trace...is
 the fact that its coming from junos-self screwing things up?
The trace shows that no source NAT happened:
 Jun 11 21:51:22 21:51:21.1472397:CID-1:RT:flow_first_routing: call
 flow_route_lookup(): src_ip 192.168.29.11, x_dst_ip 192.81.130.21, in ifp
 .local..0, out ifp N/A sp 8, dp 207, ip_proto 1, tos 0
[...]
 Jun 11 21:51:22 21:51:21.1472397:CID-1:RT:flow_first_src_xlate:
  nat_src_xlated: False, nat_src_xlate_failed: False

 Jun 11 21:51:22 21:51:21.1472397:CID-1:RT:flow_first_src_xlate: src nat
 returns status: 0, rule/pool id: 0/0, pst_nat: False.

 Jun 11 21:51:22 21:51:21.1472397:CID-1:RT:  dip id = 0/0, 192.168.29.11/8-
 192.168.29.11/8
This means you were sending packets to the Internet from the source IP
192.168.29.11.


Re: [j-nsp] Juniper EX3200 and vrf lite

2013-06-17 Thread Pavel Lunin

17.06.2013 15:03, Per Granath wrote:
 http://www.juniper.net/techpubs/en_US/release-independent/junos/topics/concept/ex-series-software-features-overview.html#layer-3-protocols-features-by-platform-table

 Supported from 12.3R1, but without PIM, IGMP, multicast in the VRF.
Looks like you've confused the EX3200/4200 with the EX2200/3300. The OP
is about the EX3200, where VRs have been supported for ages (since 9.2),
and multicast is also there. No license is needed.

On the EX2200/3300, VRs also require an EFL.


Re: [j-nsp] Juniper EX3200 and vrf lite

2013-06-17 Thread Pavel Lunin
17.06.2013 13:40, Victor Sudakov wrote:
 Does EX3200 support the VRF lite feature? 

 I have read the datasheet at
 http://www.juniper.net/us/en/products-services/switching/ex-series/ex3200/
 and could not find VRF lite.

Virtual Router is what VRF lite is called in Juniper's parlance.


Re: [j-nsp] SRX550 Mode Packet Based for BGP Full Routing

2013-06-21 Thread Pavel Lunin


20.06.2013 21:37, Giuliano Medalha wrote:

Has anyone used the SRX550 in packet based mode for border router with BGP ?

Considering the datasheet it only supports 712k BGP routes.


This is not a hard limit, just the officially supported value.

I have some customers who use the SRX650 as an enterprise border with
two full tables, and it works pretty OK for their needs. I think the
SRX550 will too.

Having said that, I see very little reason to use an SRX for this. For
an enterprise network you just don't need full tables. If it's an ISP,
then, I'm sure, an MX5 will cost not that much more, while it's a true
router with dedicated forwarding hardware and all.

Although the SRX650/550 and even the 240H2 have 2GB of RAM, all branch
SRXes, even when converted to packet mode, reserve a hell of a lot of
memory for fwdd's session table, which is unusable for rpd and other
processes. Moreover, the FIB is also stored in this same RAM, as
forwarding is done in software. Given the exponential growth of the
Internet BGP table, this is not going to scale in the long term.



Re: [j-nsp] BGP Multipath

2013-07-22 Thread Pavel Lunin


19.07.2013 10:54, Mark Tinka wrote:

Why not prepend the routes out of ur gigE? That way it
becomes the backup link and u can turn down bgp anytime
you want on the gigE without impact

Because many ISP's LOCAL_PREF the hell out of customer
routes, where prepending would be ignored. However, this
isn't always the case, so the OP can always try it.


I'm afraid this explanation needs to be expanded a bit. A high
LOCAL_PREF on the ISP side for customers' routes is a common practice,
but it makes the prepended AS_PATH (and other BGP attributes) ignored
only within the ISP's AS. So traffic from the backup ISP's own
customers, DCs, etc. will still come directly through their link despite
the prepended path. However, when they advertise the prefixes to the
world, AS_PATH comes back into play, thus making the prepend work. If
the 1G ISP does not support LP-altering communities, it might be a
compromise solution to receive the traffic generated by the 1G ISP
itself directly through the 1G link, but keep the rest of the Internet
on the 10GE using AS_PATH prepend.


As Mark said, community signaling for LP is the more ideal way
technically, but the problem is that not all ISPs support it. Often NOCs
don't even know what it is. I've seen an ISP (not that small a one) that
eventually stopped supporting LP communities silently. The reply to the
customer's complaint was: It's not needed anymore, as we upgraded our
routers to more advanced gear. They fixed it in the end, but it took a
week-long scandal. So I'd recommend still using prepends together with
the community-signaled LP, just in case.
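
A hedged sketch of combining both on the export toward the backup ISP.
The AS number 64512 and the community value 65000:80 are placeholders
(the real LP community must come from the upstream's published policy):

```
set policy-options community ISP-LOW-PREF members 65000:80
set policy-options policy-statement TO-BACKUP-ISP term depref then as-path-prepend "64512 64512 64512"
set policy-options policy-statement TO-BACKUP-ISP term depref then community add ISP-LOW-PREF
set policy-options policy-statement TO-BACKUP-ISP term depref then accept
set protocols bgp group backup-isp export TO-BACKUP-ISP
```

If the upstream honors the community, it lowers LP inside their AS; the
prepend covers everyone else.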



Re: [j-nsp] Filter-based VLAN membership

2013-07-22 Thread Pavel Lunin

16.07.2013 04:21, Dale Shaw wrote:

The desktop/end-user folks are looking at using Microsoft's MED-V
platform to support legacy apps on a new Windows 7-based SOE. From
what I can tell, MED-V is basically an instance of Windows XP running
in Virtual PC.

The desktop guys are telling me that dot1q-tagging the traffic from
the VM isn't supported, nor can they cope operationally with NAT
between the guest and host, so I'm looking at other options for
separating this traffic, if for no other reason than to avoid the need
to re-design the IP addressing plan to support larger subnets.



Looks like you rather need MAC-based VLAN assignment, not filter-based:

http://www.juniper.net/techpubs/en_US/junos12.2/topics/task/configuration/authentication-static-mac-bypass-ex-series-cli.html

(Despite the config stanza, it has virtually nothing to do with 802.1X.)

Note that you can set a mask length for the MACs, which will match all
VMs with a single config line. Or you can make the EX ask RADIUS for the
VLAN-ID of a given MAC.




Re: [j-nsp] BGP Multipath

2013-07-23 Thread Pavel Lunin


23.07.2013 16:16, Mark Tinka wrote:

I'm afraid this explanation needs to be expanded a bit.
High LP on the ISP side for customers' routes is a
common practice, but this makes the perpended AS-PATH
(and other BGP attributes) ignored only within the ISP
AS.

Yes, this is true.

However, if your upstream handles a large portion of the
Internet, that's a reasonable amount of traffic,
particularly if your secondary ISP is not as well-connected
as the other ISP.



Yep. This consideration implicitly means that the obvious (at first glance) 
idea of buying a bigger link from a cheaper, smaller ISP and a smaller one 
from a larger and usually more expensive ISP is apparently wrong. A very 
frequent mistake enterprise network guys tend to make.



Re: [j-nsp] SRX devices upgraded to 2GB ram

2013-07-23 Thread Pavel Lunin


22.07.2013 19:09, Gavin Henry wrote:

This is the info we got from our supplier in UK who is a Juniper Elite
partner:

 It's the same functionality and operation, just comes with 2G memory.
It's part of a general refresh of the line that Juniper are doing just now
to support future applications.

Not sure how close those applications are. Some more UTM apps?



That's true, but there is a caveat (apart from the minimum JUNOS version).

Although I haven't asked my local SE, I'm sure the -H2 models won't cluster 
with the old SRXes. So people planning to upgrade their already-installed 
SRXes to clusters must either hurry up or they will need to buy two boxes 
instead of one. EoL for the old models is 31 Dec 2013, but the channel will 
obviously push the new models as soon as it gets rid of the old stock.





Re: [j-nsp] SRX devices upgraded to 2GB ram

2013-07-24 Thread Pavel Lunin



Not even remotely true - I have a 240H and a 240H2 clustered together on my 
desk right beside me - no issues..


Wow, thanks.



Re: [j-nsp] SRX devices upgraded to 2GB ram

2013-07-24 Thread Pavel Lunin


Not even remotely true - I have a 240H and a 240H2 clustered together 
on my desk right beside me - no issues..


Wow, thanks.



Though I wonder if you can cluster an SRX***B with an H2 one. They are not 
shipping to Russia yet, so I can't check myself.



Re: [j-nsp] IP Monitoring/Tracking (SLA) on high end SRX

2013-08-21 Thread Pavel Lunin
Some folks used to be happy with this event script + RPM probes (though I
myself hate to rely on scripts):

Event script:
http://senetsy.livejournal.com/5093.html

More realistic RPM-probe example:
http://senetsy.livejournal.com/5262.html

Feel free to just ignore the Russian text in the blog (nothing really
important).



--
Pavel Lunin
Senetsy,
Moscow

+7 495 983-05-90, ext. 109
http://www.senetsy.ru


2013/8/18 Morgan McLean wrx...@gmail.com

 I guess technically any protocol would help in this case...regardless of
 default advertisement. I'm guessing no protocols are possible?


 On Sun, Aug 18, 2013 at 2:46 AM, Morgan McLean wrx...@gmail.com wrote:

  Could you ask them to use a routing protocol and implement some logic on
  their side dictating when they advertise a default to you?
 
  Morgan
 
 
  On Sat, Aug 17, 2013 at 2:40 PM, Ahmad Hasan barakat-ah...@hotmail.com
 wrote:
 
  BFD is not a valid option because the other end is cisco and the
 customer
  is
  using PBR, so based on their feedback they can't implement BFD along
 with
  PBR on cisco side
 
  BR,
  A.Hasan
 
  -Original Message-
  From: Chris Kawchuk [mailto:juniperd...@gmail.com]
  Sent: Friday, August 16, 2013 3:47 AM
  To: Ahmad Hasan
  Cc: juniper-nsp@puck.nether.net
  Subject: Re: [j-nsp] IP Monitoring/Tracking (SLA) on high end SRX
 
  How about a default 0.0.0.0/0 with a bfd-liveliness detection.
 
  We use this for conditionally routing statics every now and then.
 
  Works well assuming the next-hop supports BFD; and no dynamic routing
  protocol needed.
 
  - CK.
 
 
  On 16/08/2013, at 7:15 AM, Darren O'Connor darre...@outlook.com
 wrote:
 
   You could run VRRP on R1 and R2 giving R1 the higher priority. Have
   the static default on the SRX3600 pointing to the VRRP IP
  
   Darren
   http://www.mellowd.co.uk/ccie
  
  
   From: barakat-ah...@hotmail.com
   To: juniper-nsp@puck.nether.net
   Date: Tue, 13 Aug 2013 13:16:49 +0300
   Subject: [j-nsp] IP Monitoring/Tracking (SLA) on high end SRX
  
   Dear All
  
  
  
   We have high end SRX3600 connected to layer2 switch and then to two
   gateway routers, and we have a static default route to R1 with
   default preference
   (5) and to R2 with preference 10, we need to track the IP of R1 so
   that in case we couldn't ping R1 the route to prefer R2 i.e. in case
   the link between R1 and the layer2 switch went down the route to
 prefer
  R2.
  
   Noting that we were able to accomplish the above requirements with
   branch SRX using either two options (set services ip-monitoring) or
   (set services
   rpm)
  
   Unfortunately the mentioned commands are not supported on high end
   SRX
  
   Any ideas.
  
  
  
   Regards
  
   A.Hasan
  
  
  
  
  
   ___
   juniper-nsp mailing list juniper-nsp@puck.nether.net
   https://puck.nether.net/mailman/listinfo/juniper-nsp
  
 
 
  ___
  juniper-nsp mailing list juniper-nsp@puck.nether.net
  https://puck.nether.net/mailman/listinfo/juniper-nsp
 
 
 
 
  --
  Thanks,
  Morgan
 



 --
 Thanks,
 Morgan
 ___
 juniper-nsp mailing list juniper-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/juniper-nsp



Re: [j-nsp] Drawbacks when using QFX5100 and EX4300 in mixed VCF mode

2014-08-21 Thread Pavel Lunin
19.08.2014 19:51, Sebastian Wiesinger wrote:
 ---
 Hardware Requirements for a Virtual Chassis Fabric

 A VCF can contain up to four devices configured as spines, and up to
 twenty total devices.

 All spine devices must be QFX5100 devices. We recommend optimizing the
 performance of your VCF by also configuring QFX5100 devices as your
 leaf devices. A non-mixed VCF has the highest port density and feature
 support for a VCF. Nevertheless, you can configure any combination of
 QFX5100, QFX3600, QFX3500, or EX4300 devices into leaf devices within
 your VCF.
 ---

 Any idea what they are talking about? What drawbacks does a mixed VCF
 have in practice?

 One thing seems to be that you lose the ability to have output filters
 on interfaces but that's all I know.


I guess there is a lot of marketing here. Of course, many features are
scaled down to the so-called least common denominator, as guys have
already mentioned above. But this is hardly surprising and doesn't
mean you must use a more featured product everywhere just because it's
more featured.

As for the QFX3500, AFAIK there's just no point in using them at all, as
from the price/performance/features perspective the QFX5100 is simply
better, except maybe in some corner cases (which I am not aware of). Using
QFX3600s as leaves when the spines are QFX5100s is unreasonable from the
pure performance PoV, as the QFX3600 is a 40GE switch and the QFX5100 is
10GE. As for the EX4300, of course any vendor would say you'd better buy
10GE switches instead of 1GE, just because 10GE switches have much more
performance. And this is totally correct, isn't it? ;)


Re: [j-nsp] layer 2 sp services

2017-07-28 Thread Pavel Lunin
Aaron, just open "MPLS-Enabled Applications" and/or "MPLS in the SDN Era".
Both are fantastic books. It's all there :)


Re: [j-nsp] Many contributing routes

2017-08-12 Thread Pavel Lunin
BTW, I personally think that even aggregate routes bring more headache than
benefits, let alone generate.

A classic case is using an aggregate to generate your own public prefixes
while at the same time having a loopback address out of this range. Or a
static route. Or a connected subnet. Theoretically you can sort this out
with policies, but it's very error-prone.

These routes tend to be relatively stable, so NOCs never deal with the
underlying dynamism and often forget to update policies when adding static
routes/whatever.

Generate is even clumsier; all this "WTF is my next-hop?" tie-breaking
stuff is the best way to an unmanageable mess.

In my opinion, a floating static (preference 999) discard route is your
friend for this kind of aggregation.

In addition, in the case of the Internet, it's always a good idea to have a
floating static discard, otherwise you have an implicit floating static
REJECT as prescribed by RFC 1812 (see your show route forwarding-table) and
all the corresponding risk of DoSing your uKernel MPC CPU.
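
A minimal sketch of such a floating discard route (the prefix is a
placeholder; substitute your own aggregate):

```
routing-options {
    static {
        route 192.0.2.0/24 {
            discard;
            preference 999;    /* floats below IGP/BGP-learned more-specifics */
        }
    }
}
```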

Regards,
Pavel


Re: [j-nsp] EX4550 (Un-)known unicast flooding at session start for up to 100ms

2017-08-12 Thread Pavel Lunin
> Not sure this is specifically related to the OP but there is a known
> hardware limitation/feature of some Broadcom chips used in many EX switches
> of the previous generations (including EX4500).
>

As some of you have already pointed out off-list, it's Marvell, of course, not
Broadcom )


Re: [j-nsp] EX4550 (Un-)known unicast flooding at session start for up to 100ms

2017-08-12 Thread Pavel Lunin
Not sure this is specifically related to the OP but there is a known
hardware limitation/feature of some Broadcom chips used in many EX switches
of the previous generations (including EX4500). They use a hash table to
reduce the MAC lookup length, which was too small in some older JUNOS
versions, leading to hash collisions and consequent MAC learning failures in
some scenarios.

Links worth checking:

https://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/mac-lookup-length-edit-ethernet-switching-options.html
https://forums.juniper.net/t5/Ethernet-Switching/EX4500-show-ethernet-switching-hash-collisions/td-p/204849
https://yingsnotebook.wordpress.com/2017/03/20/mac-address-learning-problem-in-juniper-ex-switch/




2017-08-12 2:14 GMT+02:00 Aaron Gould :

> Sorry to hear that.
>
> I may have mentioned previously to someone else on the list... my EX4550's
> are rock solid... over 4 years uptime... I run our mirrored/redundant data
> centers behind them (hp 3par, hypervisors, etc) and our internet cdn caches
> behind them too...(Akamai, netflix, and that other one)
>
> A pair of 4550's in top of racks virtual chassis'ed together
>
> Here they are...
>
> root@sabn-dcvc-4550> show system uptime | grep "up|fpc"
> fpc0:
> 7:07PM  up 1516 days,  5:51, 0 users, load averages: 0.09, 0.11, 0.08
> fpc1:
> 7:07PM  up 1516 days,  5:51, 2 users, load averages: 0.22, 0.17, 0.16
>
>
> root@stlr-dcvc-4550> show system uptime | grep "up|fpc"
> fpc0:
> 7:09PM  up 1520 days,  4:21, 0 users, load averages: 0.11, 0.15, 0.14
> fpc1:
> 7:09PM  up 1520 days,  4:42, 1 user, load averages: 0.25, 0.22, 0.18
>
>
> root@stlr-dcvc-4550> show version | grep "fpc|model|boot"
> fpc0:
> --
> Model: ex4550-32f
> JUNOS Base OS boot [12.2R4.5]
> fpc1:
> --
> Model: ex4550-32f
> JUNOS Base OS boot [12.2R4.5]
>
> run a bunch of vlans/stp...
>
> {master:1}
> root@stlr-dcvc-4550> show spanning-tree bridge brief | grep
> "protocol|Vlan"
> Enabled protocol: RSTP
>
> STP bridge parameters for VLAN 12
> Enabled protocol: RSTP
>
> STP bridge parameters for VLAN 1000
> Enabled protocol: RSTP
>
> STP bridge parameters for VLAN 10
> Enabled protocol: RSTP
>
> STP bridge parameters for VLAN 11
> Enabled protocol: RSTP
>
> STP bridge parameters for VLAN 210
> Enabled protocol: RSTP
>
> STP bridge parameters for VLAN 204
> Enabled protocol: RSTP
>
> STP bridge parameters for VLAN 202
> Enabled protocol: RSTP
>
> STP bridge parameters for VLAN 201
> Enabled protocol: RSTP
>
> STP bridge parameters for VLAN 14
> Enabled protocol: RSTP
>
> STP bridge parameters for VLAN 13
> Enabled protocol: RSTP
>
> STP bridge parameters for VLAN 1006
> Enabled protocol: RSTP
>
> STP bridge parameters for VLAN 101
> Enabled protocol: RSTP
>
> STP bridge parameters for VLAN 921
> Enabled protocol: RSTP
>
> STP bridge parameters for VLAN 2
> Enabled protocol: RSTP
>
> STP bridge parameters for VLAN 2202
> Enabled protocol: RSTP
>
> STP bridge parameters for VLAN 3
> Enabled protocol: RSTP
>
> STP bridge parameters for VLAN 4
>
>
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>


[j-nsp] Inline IPFIX sampling rate

2017-07-25 Thread Pavel Lunin
Hi list,

It's been a long time since I last wrote here :)

Does anyone know in details how sampling-rate works (if it does) for inline
IPFIX on MX/Trio?

AFAIR, the sampling-rate config didn't have any meaning for inline IPFIX
until some version of JUNOS. It used to be always 1, no matter what you had
in the config.
Question 1: If that's correct, in which version was this changed?

As of the 2nd edition of the MX book, the input rate must follow the formula
"input_rate = 2^n - 1". But what this means is somewhat poorly explained.
Question 2: Do I correctly understand that the value configured with
"sampling instance <name> rate <rate>" must conform to this rule? Here
<rate> is not the power-of-two index (aka n in the formula above), it is the
actual sampling rate (the result of the formula), right?

Question 3: Given that I correctly understand the previous point, what if I
configure something that does not conform to this rule, e.g. "rate 100"? I
suspect this should lead to "rate 1" being programmed in the PFE, but I
can't find a way to check. Does anyone know a uKernel command to verify what
is actually applied at the PFE level? It seems that "show sample instance
summary" just replicates the config values.
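
For illustration, assuming the 2^n - 1 reading is right, a conforming
sampling instance would be configured something like this (the instance name
is hypothetical):

```
forwarding-options {
    sampling {
        instance {
            IPFIX-SAMPLE {               /* instance name is a placeholder */
                input {
                    rate 1023;           /* 1023 = 2^10 - 1, conforms; "rate 100" would not */
                }
            }
        }
    }
}
```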


Regards,
Pavel

Re: [j-nsp] Inline IPFIX sampling rate

2017-07-26 Thread Pavel Lunin
2017-07-25 21:41 GMT+02:00 Scott Granados :

> I can confirm the rate=1 regardless of behavior up through 13.2 up through
> the PFE programming blocked by flow PR when that bug was addressed.
>

To be honest, I didn't understand. You mean the famous "rate is always 1
regardless of config" is in fact a bug, not a feature?


Re: [j-nsp] packet checksum error in input fab_stream

2017-08-01 Thread Pavel Lunin
> I raised the same point to Joerg in another forum and suggested JTAC,
> if I had to venture a guess, perhaps PFE_ID 128 is actually RE
> injected packet?
>
>
I thought of this, but it shouldn't come through the fabric.

JTAC is always an option but, as it is I-chip, it might not be supported.
And… well, I'd do a lot to not find myself discussing THIS with JTAC :)

Re: [j-nsp] packet checksum error in input fab_stream

2017-08-01 Thread Pavel Lunin
+1 to Saku.

This means your egress PFE, identified by the FPC and ICHIP numbers in the
log, receives corrupted cells from the fabric. It might be caused by an
ingress PFE, the fabric chips or even the backplane lines.

In this case you see it on different fabric streams, which means that this
is not caused by a single faulty ingress PFE; otherwise you'd see the same
fab_stream IDs on the egress side (which is where the error is detected).
Stream 4 is from FPC1-PFE0, stream 7 is from FPC1-PFE3 and so on (STREAM_ID
= FPC_ID × 4 + PFE_ID).

One strange thing here is PFE_ID 128. This should be a kind of dummy ID (the
MX960 max PFE_ID is 47); I don't know what it means. Maybe some I-chip
specificity: e.g., as I remember, packets switched between two interfaces on
the same PFE were looped through the fabric as well. It might be related to
something like this, or a cosmetic bug in the log.

Yes, j-cell corruption might be caused by elements, affected by
environmental conditions like high temperature.

If this happens frequently and you don't have a spare SCB, you might want to
play around with disabling/enabling fabric planes to isolate them one by one
and see if the error persists. This is disruptive and requires a certain
level of understanding of how all this works.

Take a look at KB20741.


2017-08-01 4:22 GMT+02:00 Joerg Staedele :

> Hi Guys,
>
> i get several messages like:
>
> Aug  1 03:34:13 xxx fpc0 ICHIP(3)_REG_ERR:packet checksum error in input
> fab_stream 4 pfe_id 128
> Aug  1 03:34:19 xxx fpc2 ICHIP(0)_REG_ERR:packet checksum error in input
> fab_stream 7 pfe_id 128
> Aug  1 03:36:24 xxx fpc7 ICHIP(3)_REG_ERR:packet checksum error in input
> fab_stream 6 pfe_id 128
> Aug  1 03:36:24 xxx fpc7 ICHIP(3)_REG_ERR:packet checksum error in input
> fab_stream 17 pfe_id 128
>
> JunOS 14.1R8 on MX960 (with dual RE-2000)
>
> It does not seem to have any effect.
>
> Anyone knows what exactly it means?
>
> Kind regards
>  Joerg
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>

Re: [j-nsp] vlan-ccc between ACX5048 and EX4500

2017-08-23 Thread Pavel Lunin
Try adding no-control-word under your l2circuit/neighbor/interface config
on both sides (as well as the VLAN push/pop on the ACX side).
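
For reference, a sketch of where the knob sits, reusing the OP's values (the
neighbor address is a placeholder; adjust the interface per side):

```
protocols {
    l2circuit {
        neighbor 10.200.0.1 {                /* placeholder loopback of the remote PE */
            interface xe-0/0/46.517 {
                virtual-circuit-id 1171;
                mtu 9000;
                encapsulation-type ethernet-vlan;
                no-control-word;             /* disable the pseudowire control word */
            }
        }
    }
}
```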

2017-08-23 15:16 GMT+02:00 Jay Hanke :

> No luck. Here is the config I tried
>
> unit 517 {
> encapsulation vlan-ccc;
> vlan-id 517;
> input-vlan-map pop;
> output-vlan-map push;
> }
>
>
>
> On Wed, Aug 23, 2017 at 6:38 AM, Enoch Nyatoti  wrote:
> > Hi,
> >
> > Can you try the following on your ACX5048 interface:
> >
> > set interfaces xe-0/0/46.517 input-vlan-map pop
> > set interfaces xe-0/0/46.517 output-vlan-map push
> >
> > Best regards
> > Enoch Nyatoti
> >
> >
> >
> >
> > On Wednesday, August 23, 2017, 7:51:21 AM GMT+1, Jay Hanke
> >  wrote:
> >
> >
> > I'm having an issue where I'm unable to pass traffic through a
> > vlan-ccc between an ACX5048 and an EX4500. Other circuits on same
> > boxes are working fine. The other circuits are ACX to ACX and EX to
> > EX.
> >
> > PE1 (ex4500)
> >
> >
> > l2circuit {
> > neighbor 10.200.xx.xy {
> > interface xe-0/0/1.517 {
> > virtual-circuit-id 1171;
> > mtu 9000;
> > encapsulation-type ethernet-vlan;
> > }
> > }
> > }
> >
> > xe-0/0/1 {
> > vlan-tagging;
> > mtu 9216;
> > encapsulation vlan-ccc;
> > unit 517 {
> > encapsulation vlan-ccc;
> > vlan-id 517;
> > }
> > }
> >
> > PE2 (ACX5048):
> >
> > l2circuit {
> > neighbor 10.200.xx.xx {
> > interface xe-0/0/46.517 {
> > virtual-circuit-id 1171;
> > mtu 9000;
> > encapsulation-type ethernet-vlan;
> > }
> > }
> > }
> >
> > xe-0/0/46 {
> >
> > vlan-tagging;
> > mtu 9216;
> > encapsulation flexible-ethernet-services;
> >
> > unit 517 {
> > encapsulation vlan-ccc;
> > vlan-id 517;
> > }
> > }
> >
> > I'm learning mac-addresses through the link on the PE2 side.
> >
> > The interface connected on the PE1 side doesn't learn anything.
> >
> > show l2circuit con shows the PW up on both PE devices.
> >
> > I've reloaded software on the EX4500 with the same result. There is
> > one P router between the PE devices.
> >
> > All my existing PW are between EX4500 or between ACX. We don't have
> > any working between systems with different models acting as PE
> > devices. But quite a few working where PE devices are the same type.
> >
> > Any ideas?
> >
> > Jay
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>


Re: [j-nsp] Moving onto EX2300

2017-09-20 Thread Pavel Lunin
VRs on a basic enterprise access switch? You guys are really crazy.

"Gustav Ulander" :

Yeah, let's make the customer's job harder, partly by not supporting old
features in the next incarnation of the platform (think VRF support). Also
let's not make them work the same, so that the customer must relearn how to
configure them.
Excellent way of actually pushing the customer to also look at other
platforms...

//Gustav

-Original Message-
From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On behalf of William
Sent: 20 September 2017 21:10
To: Chris Morrow 
Cc: juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] Moving onto EX2300

Thanks to all the replies so far!

Regarding a VC Licence -
https://www.juniper.net/documentation/en_US/junos/topics/
concept/ex-series-software-licenses-overview.html#jd0e59

Here are the features which require a EFL:

Bidirectional Forwarding Detection (BFD) IGMP (Internet Group Management
Protocol) version 1 (IGMPv1), IGMPv2, and
IGMPv3
IPv6 routing protocols: Multicast Listener Discovery version 1 and 2 (MLD
v1/v2), OSPFv3, PIM multicast, VRRPv6 Multicast Source Discovery protocol
(MSDP) OSPF v2/v3 Protocol Independent Multicast (PIM) dense mode, PIM
source-specific mode, PIM sparse mode Real-time performance monitoring
(RPM) RIPng (RIPng is for RIP IPv6) Virtual Router Redundancy Protocol
(VRRP)

Which seems straight forward, but they have this bit at the end -
Note: You require a paper license to create an EX2300 Virtual Chassis. If
you do not have a paper license, contact Customer Support.


And looking at the EX2300 packing list:

* Paper license for Virtual Chassis (only for the models EX2300-C-12T-VC,
EX2300-C-12P-VC, EX2300-24T-VC, EX2300-24P-VC, EX2300-48T-VC, and
EX2300-48P-VC)

So it appears I may need to ensure i'm ordering EX2300-48P-VC if I want to
stack 'em, I need to check the finer details with Juniper.

Cheers,

William


On 20 September 2017 at 19:18, Chris Morrow  wrote:

> At Wed, 20 Sep 2017 17:03:21 +,
> Raphael Maunier  wrote:
> >
> > Not supported at all.
> >
> > According to a meeting last week, hardware limitation … EX2200 or
> > 3400 but no support of BGP, if bgp is needed EX3300 / 4300
> >
>
> I found the 3400's are painfully different from 3300/3200's.. with
> respect to vlans, trunks and access port assignment into said vlans..
> and actually getting traffic to traverse a trunk port to an access
> port.
>
> this coupled with what seems a requirement to enable an IRB interface
> to attach the management ip address to seems ... wonky.
>
> I don't find the docs online particularly enlightening either :) I
> have a 3300 config, it should 'just work' on a 3400.. I would have
> expected anyway.
>
> also, I don't think you can disable the VC functions in the
> 3400:
> @EX3400-0401> show chassis hardware
> Hardware inventory:
> Item Version  Part number  Serial number Description
> ChassisNX0217020007  EX3400-48T
> Pseudo CB 0
> Routing Engine 0  BUILTIN  BUILTIN   RE-EX3400-48T
> FPC 0REV 14   650-059881   NX0217020007  EX3400-48T
>   CPU BUILTIN  BUILTIN   FPC CPU
>   PIC 0  REV 14   BUILTIN  BUILTIN   48x10/100/1000
> Base-T
>   PIC 1  REV 14   650-059881   NX0217020007  2x40G QSFP
>   PIC 2  REV 14   650-059881   NX0217020007  4x10G SFP/SFP+
> Xcvr 0   REV 01   740-021309   FS40531D0014  SFP+-10G-LR
> Xcvr 1   REV  740-021309   FS40531D0015  SFP+-10G-LR
> Power Supply 0   REV 05   640-060603   1EDV6486028   JPSU-150W-AC-AFO
> Power Supply 1   REV 05   640-060603   1EDV7021509   JPSU-150W-AC-AFO
> Fan Tray 0   Fan Module,
> Airflow Out (AFO)
> Fan Tray 1   Fan Module,
> Airflow Out (AFO)
>
> (port xe-0/2/0 and xe-0/2/1 are what I'd like to disable)
>
> @EX3400-0401> request virtual-chassis vc-port delete pic-slot 2 port 0
> error: interface not a vc-port
> @EX3400-0401> request virtual-chassis vc-port delete pic-slot 2 port 1
> error: interface not a vc-port
>
> of course, possibly they are not vc-ports, and are only acting like
> 3300 vc ports before I diable VC functionality :)
>
> -chris
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>

Re: [j-nsp] Moving onto EX2300

2017-09-21 Thread Pavel Lunin
Oops, this was supposed to be on-list.

<plu...@gmail.com>:

Well, in fact it's not really about "removing a feature".

Both the EX2200 and EX3300 were based on a Marvell PFE (and control plane
CPU as well), while the 2300/3400 have Broadcom chips inside. If Juniper
could, they would be happy to reuse the same code. But no.

And trust me, you don't want them to even try to rewrite all the features
for the new platform at once (hello, those who remember the famous ScreenOS
to SRX and E series to MX BNG "rapid" migration stories).

I'm not sure this "hardware limitation" is really so hardware that they
won't ever implement it. VRs were added relatively recently to the
EX2200/3300, while it had been known as a "hardware limitation" for many
years.

Anyway, I wouldn't rely on VRs on this kind of switch, both because it
implies a clumsy design and because it is a rarely used feature, poorly
tested and generating little market pressure in case of bugs.

I am not even sure it's really 100% supported on the EX2200/3300; I've
recently seen that cross-VR routes just don't work on the EX3300, and it
seems to be a feature rather than a bug.

On 20 Sep 2017 at 9:37 PM, "Gustav Ulander" <
gustav.ulan...@telecomputing.se> wrote:

> I agree it's not the best platform, but I'm guessing there are at least a
> couple of implementations out there that use it for one reason or the other.
>
> It's not so much the feature itself as the whole "let's remove a feature and
> not replace it with something similar" that gets me. It shows a lack of
> commitment to one's customers.
>
> To be honest perhaps Juniper shouldn’t have added VRF support on the 2200s
> at all  and just point to 3300s or SRX line.
>
>
>
> //Gustav
>
>
>
> *From:* Pavel Lunin [mailto:plu...@gmail.com]
> *Sent:* 20 September 2017 21:31
> *To:* Gustav Ulander <gustav.ulan...@telecomputing.se>
> *Cc:* juniper-nsp <juniper-nsp@puck.nether.net>; Chris Morrow <
> morr...@ops-netman.net>; William <wil...@gmail.com>
> *Subject:* Re: [j-nsp] Moving onto EX2300
>
>
>
> VRs on a basic enterprise access switch? You guys are really crazy.
>
>
>
> "Gustav Ulander" <gustav.ulan...@telecomputing.se>:
>
> Yea lets make the customers job harder partly by not supporting old
> features in the next incarnation of the platform (think VRF support) . Also
> lets not make them work the same so that the customer must relearn how to
> configure them.
> Excellent way of actually pushing the customer to also look at other
> platforms...
>
> //Gustav
>
> -Original Message-
> From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On behalf of William
> Sent: 20 September 2017 21:10
> To: Chris Morrow <morr...@ops-netman.net>
> Cc: juniper-nsp@puck.nether.net
> Subject: Re: [j-nsp] Moving onto EX2300
>
>
> Thanks to all the replies so far!
>
> Regarding a VC Licence -
> https://www.juniper.net/documentation/en_US/junos/topics/
> concept/ex-series-software-licenses-overview.html#jd0e59
>
> Here are the features which require a EFL:
>
> Bidirectional Forwarding Detection (BFD) IGMP (Internet Group Management
> Protocol) version 1 (IGMPv1), IGMPv2, and
> IGMPv3
> IPv6 routing protocols: Multicast Listener Discovery version 1 and 2 (MLD
> v1/v2), OSPFv3, PIM multicast, VRRPv6 Multicast Source Discovery protocol
> (MSDP) OSPF v2/v3 Protocol Independent Multicast (PIM) dense mode, PIM
> source-specific mode, PIM sparse mode Real-time performance monitoring
> (RPM) RIPng (RIPng is for RIP IPv6) Virtual Router Redundancy Protocol
> (VRRP)
>
> Which seems straight forward, but they have this bit at the end -
> Note: You require a paper license to create an EX2300 Virtual Chassis. If
> you do not have a paper license, contact Customer Support.
>
>
> And looking at the EX2300 packing list:
>
> * Paper license for Virtual Chassis (only for the models EX2300-C-12T-VC,
> EX2300-C-12P-VC, EX2300-24T-VC, EX2300-24P-VC, EX2300-48T-VC, and
> EX2300-48P-VC)
>
> So it appears I may need to ensure i'm ordering EX2300-48P-VC if I want to
> stack 'em, I need to check the finer details with Juniper.
>
> Cheers,
>
> William
>
>
> On 20 September 2017 at 19:18, Chris Morrow <morr...@ops-netman.net>
> wrote:
>
> > At Wed, 20 Sep 2017 17:03:21 +,
> > Raphael Maunier <raph...@zoreole.com> wrote:
> > >
> > > Not supported at all.
> > >
> > > According to a meeting last week, hardware limitation … EX2200 or
> > > 3400 but no support of BGP, if bgp is needed EX3300 / 4300
> > >
> >
> > I found the 3400's are painfully different from 3300/3200's.. with
> > respec

Re: [j-nsp] ACX5048 - 40 gbps ER 40 km optic

2017-10-05 Thread Pavel Lunin
2017-10-05 1:00 GMT+02:00 Aaron Gould :

> Thanks Pavel, Ok here it is.  Does this mean it should work ?
>
>
>
>   Lane 0
>
> Laser bias current:  32.187 mA
>
> Laser output power:  1.135 mW / 0.55 dBm
>
> Laser receiver power  :  0.000 mW / -40.04 dBm
>


As Sebastien mentioned above, this output means that you don't receive any
light. -40 dBm is just a fancy way to say "darkness". At the same time you
see some emitted power, which means you are sending light to the other end.
Check it on both sides: if A doesn't see what B emits, or vice versa, it
normally means you have a problem in the middle. Check your cabling (make
sure it's not MMF :) NB: as 40GE is technically 4×10GE (wavelength
multiplexed in this case), you have four lanes for each port, which is why
it's called Quad SFP and ER4.

Once you manage to get light on the other side of your fiber, you might
face another problem. As this is an ER optic, you can saturate your receiver
if you don't have enough attenuation. So either use an attenuator or the
above-mentioned common trick of winding the fiber around your finger to let
the light escape through the cladding (fix it with tape). See Chuck's
comment for more details.

Or just replace the optics with a QSFP DAC )

Re: [j-nsp] Combining two MS-MIC in MX104 for CGNAT

2017-09-25 Thread Pavel Lunin
Hi Aaron,

Yes, I had a customer with 2× MS-MICs in an MX104 in production. No major
issues with this so far.

They use neither AMS nor RSP, just good old per-source-IP FBF with bit
masks like this:

from source-address 10.0.0.0/255.0.0.1 then routing-instance CGN-1 /*
10.x.x.a, a is even */
from source-address 10.0.0.1/255.0.0.1 then routing-instance CGN-2 /*
10.x.x.b, b is odd */

With manual pools partitioning and some additional route leaking tricks to
make it redundant.
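
Spelled out as a firewall filter, the two terms above would look roughly
like this (the filter name is hypothetical; the discontiguous masks are
taken straight from the lines above):

```
firewall {
    family inet {
        filter SPLIT-BY-SRC-IP {
            term even {
                from {
                    /* mask 255.0.0.1: first octet must be 10, last bit 0 */
                    source-address {
                        10.0.0.0/255.0.0.1;
                    }
                }
                then routing-instance CGN-1;
            }
            term odd {
                from {
                    /* same mask, last bit 1 */
                    source-address {
                        10.0.0.1/255.0.0.1;
                    }
                }
                then routing-instance CGN-2;
            }
        }
    }
}
```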

A major concern for CGN is that you want all sessions from the same source
IP to be NATed to the same external address. Otherwise your support will die
under tons of "why doesn't my passive-mode FTP/PPTP/IPsec work?" This is why
basic ECMP is normally not the best option. No idea how AMS deals with this;
I think it should, but just be aware.

Pavel


2017-09-13 19:44 GMT+02:00 Aaron Gould :

> Has anyone tried this combing two MS-MIC-16G cards to accomplish higher
> CGNAT throughput ?
>
> -Aaron
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>

Re: [j-nsp] QFX 5100 can you mix vlan-ccc + vlan-bridge on the same interface with 14.1X53-D43.7

2017-09-28 Thread Pavel Lunin
I really doubt that it's supported on the QFX5100. Not 100% sure though.

But what exactly is your problem? Does it not work in this combination, or
not work at all?

IIRC, the EX4600/QFX5100 do not support the control word on pseudowires
either (like the EX4500/4550). So if you have something like an MX on the
other side and Martini signaling comes up but you see no traffic, try
no-control-word.
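A minimal sketch of what that might look like on the MX side (the neighbor address and interface are invented for the example; adapt to your own l2circuit config):

```
protocols {
    l2circuit {
        neighbor 192.0.2.1 {
            interface xe-0/0/46.517 {
                virtual-circuit-id 1171;
                /* disable the PW control word so both ends agree */
                no-control-word;
            }
        }
    }
}
```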

Kind regards,
Pavel

On 28 Sep 2017 at 4:12 PM, "Alain Hebert" wrote:

> Been crashing hard on google this morning...  Cannot find any hint of
> limitations on the QFX platform for this case.
>
> Sample config
>
> flexible-vlan-tagging;
> mtu 9216;
> encapsulation flexible-ethernet-services;
> unit 111 {
> encapsulation vlan-ccc;
> vlan-id 111;
> input-vlan-map pop;
> output-vlan-map push;
>
> }
> unit 222 {
> encapsulation vlan-ccc;
> vlan-id 222;
> input-vlan-map pop;
> output-vlan-map push;
>
> }
> unit 333 {
> encapsulation vlan-bridge;
> vlan-id 333;
> }
>
> Just no input / output with:
>
> Model: qfx5100-48s-6q
> Junos: 14.1X53-D43.7
>
> But it works in our lab
>
> Model: qfx5100-48s-6q
> Junos: 17.2R1.13
>
> --
> -
> Alain Hebert                aheb...@pubnix.net
> PubNIX Inc.
> 50 boul. St-Charles
> 
> P.O. Box 26770 Beaconsfield, Quebec H9W 6G7
> Tel: 514-990-5911  http://www.pubnix.net    Fax: 514-990-9443
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] logical system in production - MX960

2017-10-03 Thread Pavel Lunin
Strongly agree with Ytti. Most LSYS deployments I've seen were really a
"deploy an LSYS to deploy an LSYS" kind of thing.

One more or less viable use case I know of is BGP scaling: a dedicated rpd
in an LSYS for an off-path RR. It creates a dedicated 32-bit rpd, which can
consume another 4 GB of RAM. But nowadays, with all the kinds of VM-based
routers/RRs, 64-bit rpd, node slicing and so on, this approach looks
obsolete.



2017-10-03 10:25 GMT+02:00 Saku Ytti :

> On 25 September 2017 at 15:10, Aaron Gould  wrote:
>
>
> Hey,
>
> > Do you all use logical systems in your production environment ?
>
> No. Be sure you're solving real technical problems, not political or
> process. I know there are use-cases, but I suspect there are fewer
> use-cases than deployments.
>
> There are several layers of function multiplexing
>
> virtual-router: essentially just VRF
> logical-system: separate rpd running in same control-plane
> node slicing: separate freebsd KVM running on single hypervisor
>
> Since node slicing, I think logical-systems are rather useless.
>
> https://www.juniper.net/documentation/en_US/junos/
> topics/concept/node-slicing-overview.html
>
> --
>   ++ytti
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>


Re: [j-nsp] vlan-ccc between ACX5048 and EX4500

2017-08-23 Thread Pavel Lunin
Yes, the EX4500/4550 chipset is not able to handle any additional headers
inside the MPLS packet.

IIRC, they "officially support" (whatever that means) pseudowires and VPLS
only in EX4550-to-EX4550 mode, and thus it's nowhere documented that it
pops the VLAN tag by default and can't handle the control word.

On 23 Aug 2017 at 11:16 PM, "Jay Hanke" <jayha...@gmail.com> wrote:

> Setting the control word worked right away. Is the issue lack of
> support of the control word on one of the sides?
>
> On Wed, Aug 23, 2017 at 8:49 AM, Pavel Lunin <plu...@gmail.com> wrote:
> >
> >
> > Try to add no-control-word under your l2circuit/neighbor/interface
> config
> > on both sides (as well the VLAN push/pop on the ACX side).
> >
> > 2017-08-23 15:16 GMT+02:00 Jay Hanke <jayha...@gmail.com>:
> >>
> >> No luck. Here is the config I tried
> >>
> >> unit 517 {
> >> encapsulation vlan-ccc;
> >> vlan-id 517;
> >> input-vlan-map pop;
> >> output-vlan-map push;
> >> }
> >>
> >>
> >>
> >> On Wed, Aug 23, 2017 at 6:38 AM, Enoch Nyatoti <enyat...@yahoo.com>
> wrote:
> >> > Hi,
> >> >
> >> > Can you try the following on your ACX5048 interface:
> >> >
> >> > set interfaces xe-0/0/46.517 input-vlan-map pop
> >> > set interfaces xe-0/0/46.517 output-vlan-map push
> >> >
> >> > Best regards
> >> > Enoch Nyatoti
> >> >
> >> >
> >> >
> >> >
> >> > On Wednesday, August 23, 2017, 7:51:21 AM GMT+1, Jay Hanke
> >> > <jayha...@gmail.com> wrote:
> >> >
> >> >
> >> > I'm having an issue where I'm unable to pass traffic through a
> >> > vlan-ccc between an ACX5048 and an EX4500. Other circuits on same
> >> > boxes are working fine. The other circuits are ACX to ACX and EX to
> >> > EX.
> >> >
> >> > PE1 (ex4500)
> >> >
> >> >
> >> > l2circuit {
> >> > neighbor 10.200.xx.xy {
> >> > interface xe-0/0/1.517 {
> >> > virtual-circuit-id 1171;
> >> > mtu 9000;
> >> > encapsulation-type ethernet-vlan;
> >> > }
> >> > }
> >> > }
> >> >
> >> > xe-0/0/1 {
> >> > vlan-tagging;
> >> > mtu 9216;
> >> > encapsulation vlan-ccc;
> >> > unit 517 {
> >> > encapsulation vlan-ccc;
> >> > vlan-id 517;
> >> > }
> >> > }
> >> >
> >> > PE2 (ACX5048):
> >> >
> >> > l2circuit {
> >> > neighbor 10.200.xx.xx {
> >> > interface xe-0/0/46.517 {
> >> > virtual-circuit-id 1171;
> >> > mtu 9000;
> >> > encapsulation-type ethernet-vlan;
> >> > }
> >> > }
> >> > }
> >> >
> >> > xe-0/0/46 {
> >> >
> >> > vlan-tagging;
> >> > mtu 9216;
> >> > encapsulation flexible-ethernet-services;
> >> >
> >> > unit 517 {
> >> > encapsulation vlan-ccc;
> >> > vlan-id 517;
> >> > }
> >> > }
> >> >
> >> > I'm learning mac-addresses through the link on the PE2 side.
> >> >
> >> > The interface connected on the PE1 side doesn't learn anything.
> >> >
> >> > show l2circuit con shows the PW up on both PE devices.
> >> >
> >> > I've reloaded software on the EX4500 with the same result. There is
> >> > one P router between the PE devices.
> >> >
> >> > All my existing PW are between EX4500 or between ACX. We don't have
> >> > any working between systems with different models acting as PE
> >> > devices. But quite a few working where PE devices are the same type.
> >> >
> >> > Any ideas?
> >> >
> >> > Jay
> >> > ___
> >> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> >> > https://puck.nether.net/mailman/listinfo/juniper-nsp
> >> ___
> >> juniper-nsp mailing list juniper-nsp@puck.nether.net
> >> https://puck.nether.net/mailman/listinfo/juniper-nsp
> >
> >
>

Re: [j-nsp] ACX5048 - 40 gbps ER 40 km optic

2017-10-04 Thread Pavel Lunin
The command you are looking for is called "show interfaces diagnostics
optics".

2017-10-04 21:20 GMT+02:00 Aaron Gould :

> I put this into my ACX5048 in the lab today. actually 2 of them.
> QSFP+-40G-ER4 (aka QSFPP-40GBASE-ER4) SMF 40 km 1310 nm optic
>
>
>
> Optic is seen in cli, but I see no led's on chassis and when I plug them
> into each other, interfaces stay up/down.
>
>
>
> Is there something I need to do to make this 40 gig optic work in the
> ACX5048 (JUNOS 15.1X54-D61.6) ?
>
>
>
> {master:0}
> agould@eng-lab-5048-2> show interfaces terse | grep et-
> et-0/0/48               up    down
> et-0/0/49               up    down
>
> {master:0}
> agould@eng-lab-5048-2> show interfaces et-0/0/48
> Physical interface: et-0/0/48, Enabled, Physical link is Down
>   Interface index: 657, SNMP ifIndex: 582
>   Link-level type: Ethernet, MTU: 1514, LAN-PHY mode, Speed: 40Gbps, BPDU
> Error: None, MAC-REWRITE Error: None, Loopback: Disabled,
>   Source filtering: Disabled, Flow control: Disabled, Media type: Fiber
>   Device flags   : Present Running Down
>   Interface flags: Hardware-Down SNMP-Traps Internal: 0x4000
>   Link flags : None
>   CoS queues : 8 supported, 8 maximum usable queues
>   Current address: 20:4e:71:45:d3:d3, Hardware address: 20:4e:71:45:d3:d3
>   Last flapped   : 2017-10-04 10:26:33 CDT (00:09:14 ago)
>   Input rate : 0 bps (0 pps)
>   Output rate: 0 bps (0 pps)
>   Active alarms  : LINK
>   Active defects : LINK
>   Interface transmit statistics: Disabled
>
> {master:0}
> agould@eng-lab-5048-2> show interfaces et-0/0/49
> Physical interface: et-0/0/49, Enabled, Physical link is Down
>   Interface index: 658, SNMP ifIndex: 583
>   Link-level type: Ethernet, MTU: 1514, LAN-PHY mode, Speed: 40Gbps, BPDU
> Error: None, MAC-REWRITE Error: None, Loopback: Disabled,
>   Source filtering: Disabled, Flow control: Disabled, Media type: Fiber
>   Device flags   : Present Running Down
>   Interface flags: Hardware-Down SNMP-Traps Internal: 0x4000
>   Link flags : None
>   CoS queues : 8 supported, 8 maximum usable queues
>   Current address: 20:4e:71:45:d3:d7, Hardware address: 20:4e:71:45:d3:d7
>   Last flapped   : 2017-10-04 10:27:53 CDT (00:07:56 ago)
>   Input rate : 0 bps (0 pps)
>   Output rate: 0 bps (0 pps)
>   Active alarms  : LINK
>   Active defects : LINK
>   Interface transmit statistics: Disabled
>
> {master:0}
> agould@eng-lab-5048-2> show chassis hardware
> Hardware inventory:
> Item Version  Part number  Serial number Description
> Chassis(removed...)  ACX5048
> Pseudo CB 0
> Routing Engine 0  BUILTIN  BUILTIN   ACX5K Routing
> Engine
> FPC 0REV 07   650-060546   (removed...)  ACX5048
>   CPU BUILTIN  BUILTIN   FPC CPU
>   PIC 0   BUILTIN  BUILTIN   48x10G-6x40G
>
> Xcvr 48  REV 01   740-059185   (removed...)  QSFP+-40G-ER4
> Xcvr 49  REV 01   740-059185   (removed...)  QSFP+-40G-ER4
> Power Supply 0   REV 04   740-043886   (removed...)  JPSU-650W-DC-AFO
> Power Supply 1   REV 04   740-043886   (removed...)  JPSU-650W-DC-AFO
> Fan Tray 0   ACX5K Fan Tray 0,
> Front to Back Airflow - AFO
> Fan Tray 1   ACX5K Fan Tray 1,
> Front to Back Airflow - AFO
> Fan Tray 2   ACX5K Fan Tray 2,
> Front to Back Airflow - AFO
> Fan Tray 3   ACX5K Fan Tray 3,
> Front to Back Airflow - AFO
> Fan Tray 4   ACX5K Fan Tray 4,
> Front to Back Airflow - AFO
>
>
>
>
> {master:0}
> agould@eng-lab-5048-2> show chassis pic pic-slot 0 fpc-slot 0
> FPC slot 0, PIC slot 0 information:
>   Type 48x10G-6x40G
>   StateOnline
>   PIC version  2.7
>   Uptime 99 days, 23 hours, 26 minutes, 3 seconds
>
> PIC port information:
>  FiberXcvr vendor   Wave-
> Xcvr
>   Port Cable typetype  Xcvr vendorpart number   length
> Firmware
>   48   40GBASE ER4   SMJUNIPER-FINISARFTL4E1QE1C-J3 1301 nm
> 0.0
>   49   40GBASE ER4   SMJUNIPER-FINISARFTL4E1QE1C-J3 1301 nm
> 0.0
>
>
>
> - Aaron
>
>
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>


Re: [j-nsp] Enhanced MX480 Midplane?

2017-11-14 Thread Pavel Lunin
> Is there anyone who can give more details about the enhanced midplane?

It's just more lines (wires) to provide more fabric bandwidth per slot/PFE.
And maybe more power as well, though I am not sure.

It's been shipping for ages (since 2012 or so). It was originally supposed
to be required even for the SCBE and non-NG MPC3, but then Juniper
thought twice, and in the end it was not really necessary until the MPC5.
See Sebastian's comment for what it gives in terms of bandwidth per slot.

So, in theory, all MX240/480/960 chassis have been shipping with the
enhanced midplane for many years, provided your partner/SE, or whoever
makes the specs, does his job well and you don't specifically ask for the
non-enhanced version for some reason (if it's still not EOL, I don't
remember).

You can check whether you have it with the following command:

user@mx480> show chassis hardware | match Midplane
Midplane         REV 04   750-047862                 Enhanced MX480 Midplane

In the specs this is referred to as MX480-BASE3 or -PREMIUM3, in contrast
to -BASE or -PREMIUM for the non-enhanced midplane. You normally don't buy
a chassis on its own, so you don't see part codes like CHAS-BP3-MX480-S in
your BOMs.

AFAIR, you can also buy the new midplane as an FRU and swap it yourself,
though that requires pulling all the cards out of the box.

--
Pavel


Re: [j-nsp] MPC5EQ Feedback?

2017-11-01 Thread Pavel Lunin
Worth noting that XL-based PFEs have much more SRAM and can hold something
like ~10M IPv4 LPM records in the FIB, in contrast to ~2.4M for the LU
(I've never reached the limit, so these numbers are rather what I've read
or been told here and there). And, of course, if you have both LU- and
XL-based cards in the same box, your FIB is limited by the "lowest common
denominator".

2017-11-01 17:07 GMT+01:00 Pavel Lunin <plu...@gmail.com>:

>
>
> There were two versions of MPC3:
>
> 1. MPC3 non-NG, which has a single XM buffer manager and four LU chips
> (the old good ~65 Mpps LUs as in "classic" MPC1/2/16XGE old trio PFEs).
> 2. MPC3-NG which is based on exactly the same chipset as MPC5, based on
> XM+XL.
>
> MPC4 is much like MPC3 non-NG though it has two LUs instead of four with a
> new more "performant" microcode.
>
> XL chip (extended LU), which is present in MPC5/6 and 2-NG/3-NG has also
> multiple ALU cores (four, IIRC) but in contrast to MPC3 non-NG and MPC4
> these cores have a shared memory, so they don't suffer from some
> limitations (like not very precise policers) which you can face with
> multi-LU PFE architectures.
>
> MPC7 has a completely new single core 400G chip (also present in the
> recently announced MX204 and MX10003).
>
> This said, I find MPC4 quite not bad in most scenarios. Never had any
> issues, specific to its architecture.
>
> P. S. Finally this choice is all about money/performance.
>
>
> Kind regards,
> Pavel
>
>
> 2017-11-01 16:46 GMT+01:00 Scott Harvanek <scott.harva...@login.com>:
>
>> Adam,
>>
>> I thought that the MPC3E and MPC5E had the same generation Trio w/ XL and
>> XQ chips?  Just the MPC5E has two XM chips.
>>
>> Scott H
>>
>>
>>
>> > On Nov 1, 2017, at 10:28 AM, <adamv0...@netconsultings.com> <
>> adamv0...@netconsultings.com> wrote:
>> >
>> >> Scott Harvanek
>> >> Sent: Tuesday, October 31, 2017 6:57 PM
>> >>
>> >> Hey folks,
>> >>
>> >> We have some MX480s we need to add queuing capable 10G/40G ports to
>> >> and it looks like MPC5EQ-40G10G is going to be our most cost effective
>> >> solution.  Has anyone run into any limitations with these MPCs that
>> aren’t
>> >> clearly documented?
>> >>
>> >> We intend to use them for L3/VLAN traffic w/ CoS/Shaping.  Currently
>> we’re
>> >> doing that on MPC2E NG Qs w/ 10XGE-SFPP MICs , any reason we couldn’t
>> >> do the same on this along with the adding of the 40G ports? Any Layer3
>> >> limitations or the normal 2MM/6MM FIB/RIB?
>> >>
>> > Hey Scott,
>> > I'd rather go with a standard Trio architecture i.e. one lookup block
>> one buffering block (and one queuing block) -so mpc3 or mpc7.
>> > To me it seems like 4 and 5 are just experiments with the Trio
>> architecture that did not stood the test of time.
>> >
>> > adam
>> >
>> >
>>
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>>
>
>

Re: [j-nsp] MPC5EQ Feedback?

2017-11-01 Thread Pavel Lunin
There were two versions of MPC3:

1. MPC3 non-NG, which has a single XM buffer manager and four LU chips (the
good old ~65 Mpps LUs, as in the "classic" MPC1/2/16XGE old Trio PFEs).
2. MPC3-NG, which is based on exactly the same XM+XL chipset as the MPC5.

MPC4 is much like the non-NG MPC3, though it has two LUs instead of four,
with newer, more "performant" microcode.

The XL chip (extended LU), present in the MPC5/6 and the 2-NG/3-NG, also
has multiple ALU cores (four, IIRC), but in contrast to the non-NG MPC3 and
the MPC4, these cores have a shared memory, so they don't suffer from some
limitations (like not-very-precise policers) which you can face with
multi-LU PFE architectures.

MPC7 has a completely new single-core 400G chip (also present in the
recently announced MX204 and MX10003).

That said, I find the MPC4 quite decent in most scenarios. Never had any
issues specific to its architecture.

P.S. In the end this choice is all about money/performance.


Kind regards,
Pavel


2017-11-01 16:46 GMT+01:00 Scott Harvanek :

> Adam,
>
> I thought that the MPC3E and MPC5E had the same generation Trio w/ XL and
> XQ chips?  Just the MPC5E has two XM chips.
>
> Scott H
>
>
>
> > On Nov 1, 2017, at 10:28 AM,  <
> adamv0...@netconsultings.com> wrote:
> >
> >> Scott Harvanek
> >> Sent: Tuesday, October 31, 2017 6:57 PM
> >>
> >> Hey folks,
> >>
> >> We have some MX480s we need to add queuing capable 10G/40G ports to
> >> and it looks like MPC5EQ-40G10G is going to be our most cost effective
> >> solution.  Has anyone run into any limitations with these MPCs that
> aren’t
> >> clearly documented?
> >>
> >> We intend to use them for L3/VLAN traffic w/ CoS/Shaping.  Currently
> we’re
> >> doing that on MPC2E NG Qs w/ 10XGE-SFPP MICs , any reason we couldn’t
> >> do the same on this along with the adding of the 40G ports? Any Layer3
> >> limitations or the normal 2MM/6MM FIB/RIB?
> >>
> > Hey Scott,
> > I'd rather go with a standard Trio architecture i.e. one lookup block
> one buffering block (and one queuing block) -so mpc3 or mpc7.
> > To me it seems like 4 and 5 are just experiments with the Trio
> architecture that did not stood the test of time.
> >
> > adam
> >
> >
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>

Re: [j-nsp] Enhanced MX480 Midplane?

2017-12-11 Thread Pavel Lunin
> Is this true about the MX960? Does it have a midplane also?
Yes, though the particular numbers might be different. The MX960 has less
backplane capacity per slot/PFE than the MX480.


Re: [j-nsp] Best practice for igp/bgp metrics

2017-10-25 Thread Pavel Lunin
Reference bandwidth might however be useful for LAGs, where you may want
the cost of a link to go up if some of its members go down (though I prefer
ECMP in the core for most cases).

And you can combine the role/latency approach with automatic
reference-bandwidth-based cost if you configure the 'bandwidth' parameter
on the interfaces instead of a static IGP cost.

On 25 Oct 2017 at 10:08 PM, "Saku Ytti" wrote:

> Hey,
>
> This only matters if you are letting system assign metric
> automatically based on bandwidth. Whole notion of preferring
> interfaces with most bandwidth is fundamentally broken. If you are
> using this design, you might as well assign same number to every
> interface and use strict hop count.
>
> On 25 October 2017 at 22:41, Luis Balbinot  wrote:
> > Never underestimate your reference-bandwidth!
> >
> > We recently set all our routers to 1000g (1 Tbps) and it was not a
> > trivial task. And now I feel like I'm going to regret that in a couple
> > years. Even if you work with smaller circuits, having larger numbers
> > will give you more range to play around.
> >
> > Luis
> >
> > On Tue, Oct 24, 2017 at 8:50 AM, Alexander Dube 
> wrote:
> >> Hello,
> >>
> >> we're redesigning our backbone with multiple datacenters and pops
> currently and looking for a best practice or a recommendation for
> configuring the metrics.
> >> What we have for now is a full meshed backbone with underlaying isis.
> IBGP exports routes without any metric. LSP are in loose mode and are using
> isis metric for path calculation.
> >>
> >> Do you have a recommendation for metrics/te ( isis and bgp ) to have
> some values like path lengh ( kilometers ), bandwidth, maybe latency, etc
> inside of path calculation?
> >>
> >> Kind regards
> >> Alex
> >> ___
> >> juniper-nsp mailing list juniper-nsp@puck.nether.net
> >> https://puck.nether.net/mailman/listinfo/juniper-nsp
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
>
>
>
> --
>   ++ytti
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>

Re: [j-nsp] More power questions

2018-05-12 Thread Pavel Lunin
> The NEMA 6-20 plug is very uncommon in a datacenter environment. I
> don't know why Juniper sells those cords.
>

It's not Juniper, it's the reseller/partner who quotes the spec. For the MX
series there is no default per-country power cord option, as there is for
the EX, branch SRX, etc.

The MX240/480/960 PSU has a standard C20 connector, and then it's up to the
customer to choose the plug type (list-priced at something like $75) or not
buy any cord at all.

As most customers don't care about cables when placing orders worth several
tens or hundreds of thousands, resellers usually put in something of their
own choosing, which often goes to the trash bin later.

--
Kind regards,
Pavel


Re: [j-nsp] Managing large route-filter-lists

2018-05-22 Thread Pavel Lunin
Hi list,

Does anyone know if this "ephemeral configuration" thing is just a new
fancy hipster-ish name for the dynamic database feature, which has been in
Junos since 9.x and has never really been widely used in production by
normal people?
--
Kind regards,
Pavel


Re: [j-nsp] Best practice for igp/bgp metrics

2017-10-26 Thread Pavel Lunin
Well, in fact I rather meant a different way of setting role-based costs
than treating real link bandwidth as 1/cost.

>For LAG you should set minimum links to a number which allows you to
>carry traffic you need.

OK, good point. Though I am not sure all vendors allow doing it this way.

Basically I agree with your concept, but it's worth noting that you assume
the current traffic is "given" and that links can't be saturated. This
assumption only holds when there is a great number of short-lived,
low-bandwidth sessions, like residential broadband subscriber traffic, and
enough bandwidth to accommodate all of it. In this case a saturated link
and the consequent drops are equivalent to "no service". However, in some
environments (many enterprise networks, some DCs) high-traffic periods are
observed during backups/replications/etc. This normally implies a
relatively low number of high-bandwidth, long-lasting TCP flows. When
forwarded through a saturated link, TCP slows down and the traffic "adapts"
to network conditions: your backup takes 4 hours instead of 2. This is
where real bandwidth might be a meaningful metric. If I understand
correctly, it was supposed to work like this in the era when you always had
more traffic than available bandwidth.



On 25 Oct 2017 at 10:50 PM, "Saku Ytti" <s...@ytti.fi> wrote:

> I disagree. Either traffic fits or it does not fit on SPT path,
> bandwidth is irrelevant.
>
> For LAG you should set minimum links to a number which allows you to
> carry traffic you need.
>
> Ideally you have capacity redundancy on SPT level, if best path goes
> down, you know redundant path, and you know it can carry 100% of
> demand. If this is something you cannot commercially promise, you need
> strategic TE to move traffic on next-best-path when SPT is full.
>
>
> On 25 October 2017 at 23:25, Pavel Lunin <plu...@gmail.com> wrote:
> > Reference bandwidth might however be useful for lags, when you may want
> to
> > lower the cost of a link if some members go down (though I prefer ECMP in
> > the core for most cases).
> >
> > And you can combine the role/latency approach with automatic reference
> > bandwidth-based cost, if you configure 'bandwidth' parameter on the
> > interfaces instead of IGP cost.
> >
> > 25 окт. 2017 г. 10:08 ПП пользователь "Saku Ytti" <s...@ytti.fi>
> написал:
> >
> >> Hey,
> >>
> >> This only matters if you are letting system assign metric
> >> automatically based on bandwidth. Whole notion of preferring
> >> interfaces with most bandwidth is fundamentally broken. If you are
> >> using this design, you might as well assign same number to every
> >> interface and use strict hop count.
> >>
> >> On 25 October 2017 at 22:41, Luis Balbinot <l...@luisbalbinot.com>
> wrote:
> >> > Never underestimate your reference-bandwidth!
> >> >
> >> > We recently set all our routers to 1000g (1 Tbps) and it was not a
> >> > trivial task. And now I feel like I'm going to regret that in a couple
> >> > years. Even if you work with smaller circuits, having larger numbers
> >> > will give you more range to play around.
> >> >
> >> > Luis
> >> >
> >> > On Tue, Oct 24, 2017 at 8:50 AM, Alexander Dube <n...@layerwerks.net>
> >> > wrote:
> >> >> Hello,
> >> >>
> >> >> we're redesigning our backbone with multiple datacenters and pops
> >> >> currently and looking for a best practice or a recommendation for
> >> >> configuring the metrics.
> >> >> What we have for now is a full meshed backbone with underlaying isis.
> >> >> IBGP exports routes without any metric. LSP are in loose mode and
> are using
> >> >> isis metric for path calculation.
> >> >>
> >> >> Do you have a recommendation for metrics/te ( isis and bgp ) to have
> >> >> some values like path lengh ( kilometers ), bandwidth, maybe
> latency, etc
> >> >> inside of path calculation?
> >> >>
> >> >> Kind regards
> >> >> Alex
> >> >> ___
> >> >> juniper-nsp mailing list juniper-nsp@puck.nether.net
> >> >> https://puck.nether.net/mailman/listinfo/juniper-nsp
> >> > ___
> >> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> >> > https://puck.nether.net/mailman/listinfo/juniper-nsp
> >>
> >>
> >>
> >> --
> >>   ++ytti
> >> ___
> >> juniper-nsp mailing list juniper-nsp@puck.nether.net
> >> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
>
>
> --
>   ++ytti
>

Re: [j-nsp] l2circuit down one side/up another

2017-10-20 Thread Pavel Lunin
No TE, no problem. Extreme knows how to do it reeeally wrong :) At least
that was the case last time I checked.

But yes, at the basic level it works: adjacency, LSAs, etc.

On 20 Oct 2017 at 6:39 PM,  wrote:

>  I had a hard time passing IS-IS thru Extreme Switch, give up after
> 5m and used OSPF for that segment of our Pre-Prod Lab.

There's nothing magic about IS-IS. We've had IS-IS running through
Extreme switches many times (still have one such adjacency in our lab,
running through one X460 and one X480). No problem.

Steinar Haug, Nethelp consulting, sth...@nethelp.no

Re: [j-nsp] Unequal bandwidth on virtual chassis ports?

2017-10-26 Thread Pavel Lunin
In fact, a ring is "required" separately for each type of link: 32/40G
stacking ports, xe and ge.

This comes from the underlying IS-IS LFA, aka VC fast reroute, feature. Two
alternative routes are installed into the FIB for each destination; if one
link fails, the corresponding route is withdrawn quickly and the
alternative one is already there, so you don't have to wait for SPF to
reconverge. This works only for links of the same type in an LFA pair.
IIRC, VC fast reroute is enabled by default for 40/32G links; for xe and ge
you need to enable it manually.

If you don't have a ring for a given type of VCP link, the VC will still
work without fast reroute; you'll just see a somewhat longer outage in case
of a failure (you need to wait until IS-IS detects the failure and
reconverges).
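If memory serves, enabling it per uplink-port link type is a one-liner each (a sketch; double-check the knob on your platform):

```
virtual-chassis {
    fast-failover {
        ge;    /* enable VC fast reroute on GE VCP rings */
        xe;    /* ...and on XE VCP rings */
    }
}
```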

Regards,
Pavel

On 26 Oct 2017 at 11:17 PM, "Chris Kawchuk" wrote:

> As VC uses IS-IS as it's underlying protocol (last time I checked), I
> believe there is a metric associated with each VC link. show
> virtual-chassis adjacency/database/etc.. should show those metics.
>
> VC IS-IS will calculate the lowest-metric to the far-end PFE, and use
> that. I also recall that it counts packet-forwarding-engines inside the
> switch itself, and not switches per se as a hop/link count. For example, an
> EX4200 has 3 x PFEs inside the box, and depending on where/how you connect
> the back-side VC cables or front-side revenue ports as VC will affect how
> it sees the topology.
>
> VC will not load balance across unequal-costs (a-la RSVP-TE or something),
> and doesn't use multiple paths (even if they're ECMP) to the destination
> either last time I played with it; but it's been a while ;)
>
> VCF will do the ECMP-trick, BTW.
>
> - CK.
>
>
> On 27 Oct 2017, at 8:05 am, Jonathan Call  wrote:
>
> > Typically when I build virtual chassis I set up the recommended "ring"
> topology and give path an equal amount of bandwidth. Would there be any
> technical problems if I give one of the virtual chassis links more
> bandwidth than the others?
> >
> >
> > The Virtual Chassis Feature Guide for the QFX Series doesn't suggest
> there is anything wrong with this, but it doesn't really discuss the
> scenario either.
> >
> >
> >
> > Jonathan
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>

Re: [j-nsp] Understanding limitations of various MX104 bundles

2018-01-05 Thread Pavel Lunin
Yes, it's generally like that with all the absurd licenses on the low-end
> MX. Look at the price of an MX80, then look at the price of an MX5,
> upgraded gradually to an MX80 over time. Pay as you grow, indeed. It's
> actually cheaper to buy multiple other vendor routers entirely than to pay
> the exhorbitant license fees to upgrade one MX5 to support either the 2nd
> MIC slot or any one of the built-in 10G ports.
>
> That said, we have found that original MX80-5, which I believe was the
> original part number prior to Juniper shipping actual MX5 "board branded"
> routers, have no restrictions. Not that we would take advantage of this,
> but it's good information to have in the event of a capacity emergency.
>
>
Yes, originally they were shipping the MX80-5G/10G/40G, which were pure
MX80s, not limited at all. Then came the MX5/10/40, where the limits were
actually enforced. MX104 variants like the MX104-MX5 are also enforced.
Worth noting that the MX104 10G port licenses have nothing to do with those
bundle licenses, and that the whole MX104 licensing story is a complete
mess. Even the names like MX104-MX5 are crazy. There are also some tricks
but, well, technically it's enforced and you need a license.

And yes, it's not pay-as-you-grow at all. It's an intentional marketing
barrier. I've never seen a customer who upgraded those bundles, as the
licenses are intentionally overpriced.

And yes, today there is almost no reason to buy an MX104, let alone an
MX5/10/40/80. The MX204 and the MX150, aka vMX-as-an-appliance, are your
friends. The only gotcha (apart from the currently bleeding-edge Junos) is
that the MX204 will probably never support stateful services.

--
Regards,
Pavel


Re: [j-nsp] Experience with MX10003

2018-01-25 Thread Pavel Lunin
Not that brand new: it's MPC7-like, which has been around for quite some time.

What is completely new in this box is the fabric.

The main gotcha is that they are still not shipping it. Same for the MX204.
Accordingly, no software is publicly available for these platforms yet.

It means you'll get something close to production-ready by the end of
Q3 2018.

That's quite normal, though. Add a year to the announcement date and you get
the date when you can (try to) put it in production. Well, maybe.

Regards,
Pavel

On Jan 25, 2018 at 6:16 PM, "Alexander Marhold" <
alexander.marh...@gmx.at> wrote:

> Hi
> Regarding same chipset as mx960:
>
> RE yes x86
> PFE: a clear NO. It uses a "BRAND NEW" 3rd-generation Trio chipset with
> 400G throughput, also built into the MX204
>
> Grabbed info from a BDM document in PPT describing both new platforms
> Regards
>
> alexander
>
> -----Original Message-----
> From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf
> Of Alain Hebert
> Sent: Thursday, January 25, 2018 17:47
> To: Juniper List
> Subject: [j-nsp] Experience with MX10003
>
>  Hi,
>
>  After the bad experience with the QFX5100, now our rep is pushing for
> MX10003 instead of MX960.
>
>  While it's half the routing capacity (10T versus 4T), at 1/2 the price, and
> barely 3U in space, for the same chipset (according to the sales guy).
>
>  Does anything ring a bell?  Or are we going to be just another bunch of
> crash test dummies for Juniper to test this new platform?
>
>  Thanks for your time.
>
> --
> -
> Alain Hebert    aheb...@pubnix.net
> PubNIX Inc.
> 50 boul. St-Charles
> P.O. Box 26770 Beaconsfield, Quebec H9W 6G7
> Tel: 514-990-5911    http://www.pubnix.net    Fax: 514-990-9443
>

Re: [j-nsp] Prefix independent convergence and FIB backup path

2018-02-08 Thread Pavel Lunin
Here (VPNv4/v6, BGP PIC Core + PIC Edge, no addpath as not supported in vpn
> AFI) we can see that, when possible:
>


In fact, there's no need for add-path in a VRF. You've always had it "for
free" with per-PE route distinguishers.
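To illustrate the point, here is a minimal Junos-style sketch (instance
names, loopbacks, and community values are hypothetical, not from the
thread): giving each PE a unique route-distinguisher for the same VRF makes
the two PEs' VPNv4 routes for one customer prefix distinct NLRI, so a route
reflector propagates both and the ingress PE already holds a backup path for
PIC Edge without add-path.

```
# PE1 (loopback 192.0.2.1): unique RD per PE, shared route-target
set routing-instances CUST-A instance-type vrf
set routing-instances CUST-A route-distinguisher 192.0.2.1:100
set routing-instances CUST-A vrf-target target:65000:100

# PE2 (loopback 192.0.2.2): different RD, so its VPNv4 route for the same
# customer prefix is a distinct NLRI and is not suppressed by the RR
set routing-instances CUST-A instance-type vrf
set routing-instances CUST-A route-distinguisher 192.0.2.2:100
set routing-instances CUST-A vrf-target target:65000:100
```

With identical RDs on both PEs, the RR would pick and reflect a single best
path for the prefix; with per-PE RDs, both paths reach the ingress PE and
one can be pre-installed as backup.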


Re: [j-nsp] EX4200 virtual chassis problem, master going into linecard mode

2018-07-26 Thread Pavel Lunin
--> So in a 2-node VC, one node is master and one node is backup.
> If they split, the master will go down but the backup should survive, as it
> is
> still half of the original cluster
>
> So this means you should make the part you want to survive be the
> backup RE and not the master RE
>
> --- or did I miss something?!
>
>

My philosophy is that the default use case of a two-node VC (and nearly the
only use case of a VC at all) is LAG-based redundancy: two switches racked
next to each other, connected with two twinax cables, which should never
split. Of course, technically they can, but a lot of other bloody things,
which are out of our control, can happen to them: a software bug,
misconfiguration, an uncontrolled hardware failure like bit errors caused
by an overheated SFP, a drunk worker with an angle grinder, etc.

All the exotic cases like geographically distributed VCs are, in my opinion,
an exercise for the folks who can't figure out what routing protocols are
made for. So this should not be the default use case, and the default
software behavior should not be adapted to such scenarios.

Thus in a two-node VC, a *real* failure scenario is a switch failure. With
split-detection enabled, a two-node VC will only survive 50% of switch
failures (if the backup RE dies, the VC dies; if the master RE dies, the VC
survives). So in fact it's no better than a single non-redundant switch.
It's worse, actually, as the added complexity and false expectations are a
more expensive currency than a controlled service outage.

Cheers,
Pavel


Re: [j-nsp] EX4200 virtual chassis problem, master going into linecard mode

2018-07-25 Thread Pavel Lunin
> > in a virtual chassis you could add:
> >
> > set virtual-chassis no-split-detection
> >
> > This will ensure that if both VC ports go down, the master routing
> engine carries on working.
>
> Are you referring to "Scenario B" in
> https://kb.juniper.net/InfoCenter/index?page=content&id=KB13879 ?
> or a different case?
>


I don't like to explain what others say, but I think yes. This has been
known behavior forever: in a two-member VC, always disable split-detection.
You can google other threads on this in this list.

It's always been poorly documented. Last time I checked the docs, instead
of just stating clearly that it must be disabled in a two-member VC, they
"don't recommend" it, with a hand-waving explanation that if you estimate
the backup RE failure probability to be higher than that of a split-brain
condition, blah-blah-blah... Just disable split-detection, that's it :)
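For reference, a minimal sketch of the resulting two-member configuration
(the equal mastership priorities are the conventional recommendation, not
taken from this thread):

```
# Two-member EX4200 VC: disable split detection and give both members
# equal, highest mastership priority so either can take over cleanly
set virtual-chassis no-split-detection
set virtual-chassis member 0 mastership-priority 255
set virtual-chassis member 1 mastership-priority 255
```

With split-detection disabled, whichever member remains after the other
fails keeps forwarding instead of dropping into linecard mode.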

