[j-nsp] MX204 vs. MX240??
I wanted to resurrect an old thread about the MX204, from a year and a half ago:

https://lists.gt.net/nsp/juniper/64290

My understanding is that the MX204 is essentially a 1 RU MPC7, but with a few modifications. I understand that the eight 10G ports have been modified to accept 1G transceivers as well, and perhaps that the QSFP ports can accommodate a pigtail for providing a bunch of 1G connections, if necessary. The 10/40/100 capabilities of the MPC7 look great, but there are a few isolated cases where I need to support legacy 1G, and the MX204 can now handle that. Is this true?

Also, I understand that the MX204 CPU and other resources are a vast improvement over the MX80, and that the MX204 can handle multiple full Internet route BGP feeds just as well as the MX240 REs can, without compromising performance. The newer VM support inside the RE makes the requirement for an additional RE less important now, according to my understanding.

So, if you do not need a lot of speeds and feeds, and can live without a physical backup RE, the MX204 would be a good alternative to an MX240. Have I made accurate assumptions?

Clarke Morledge
Network Engineering
Information Technology
Jones Hall (Room 18)
200 Ukrop Way
Williamsburg VA 23187
William & Mary

___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] ARP resolution algorithm? Storage of MX transit packets?
Thank you for the input thus far, folks. Let me explain just a bit more about what I am dealing with.

Because we get so much garbage scanning, if a scanner tries to hit an IP address that has no ARP resolution, it really clutters up traffic unnecessarily. A simple case from my lab illustrates some of the difficulty. Here I am sending a single transit packet, passing through my MX, destined to an IP that will not resolve. Since the MX has nowhere immediately to send the packet, the RE spits out a series of ARP requests:

17:30:35.095861 ARP, Request who-has 192.168.10.21 tell 192.168.10.8, length 46
17:30:35.713821 ARP, Request who-has 192.168.10.21 tell 192.168.10.8, length 46
17:30:36.613849 ARP, Request who-has 192.168.10.21 tell 192.168.10.8, length 46
17:30:37.513831 ARP, Request who-has 192.168.10.21 tell 192.168.10.8, length 46
17:30:38.313831 ARP, Request who-has 192.168.10.21 tell 192.168.10.8, length 46

Correspondingly, rtsockmon -tn logs this:

[17:30:35:099.939] kernel Proute add inet 192.168.10.21 tid=36 plen=32 type=dest flags=0x180 nh=hold nhflags=0x1 nhidx=1290 rt_nhiflist = 0 altfwdnhidx=0 filtidx=0 lr_id = 0 featureid=0 rt_mcast_nhiflist=-1610628420
[17:30:35:101.376] kernel Pnexthopadd inet nh=hold flags=0x1 uflags=0x0 idx=1290 ifidx=421 filteridx=0 lr_id =0
[17:30:39:013.595] kernel Proute delete inet 192.168.10.21 tid=36 plen=32 type=dest flags=0x180 nh=hold nhflags=0x1 nhidx=1290 rt_nhiflist = 0 altfwdnhidx=0 filtidx=0 lr_id = 0 featureid=0 rt_mcast_nhiflist=-1610628420
[17:30:39:013.710] kernel Pnexthopdelete inet nh=hold flags=0x5 uflags=0x0 idx=1290 ifidx=421 filteridx=0 lr_id =0

In a real-world case, we have generally observed a fairly even distribution of scanning attempts on non-resolving IPs, across an entire subnet, over time. So, let's say you have an unused class C being scanned at the rate of 4 IPs per second, such that every address gets scanned about once a minute.
Assuming that each incoming transit packet kicks off 5 ARP requests (1 initial, plus 4 retries), as I saw above, that would trigger roughly 1200 ARP requests a minute, or about 20 ARP packets a second. That is a fairly moderate amplification type of attack.

In a DHCP-serviced subnet, like a /20 with some 4000 available host IPs, we might have only 3000 in use at any one time, but we want enough headroom to accommodate fluctuations in DHCP usage. That leaves the remaining 1000 unused IPs unintentionally triggering lots of unnecessary ARP traffic.

Specifically, what would be nice is a way to tune that ARP retry mechanism down from 4 retries to 2, to cut down on the noise. So far, I have not found a knob in Junos on the MX to do this. Am I missing something?

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
200 Ukrop Way
Williamsburg VA 23187
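As a sanity check on the arithmetic above, here is a quick back-of-the-envelope sketch. The 5-requests-per-trigger figure is just what I observed in the lab capture, so treat it as an assumption rather than documented Junos behavior:

```python
# Back-of-the-envelope estimate of ARP amplification caused by scans
# of unused address space, using the numbers observed above.

def arp_amplification(scan_rate_ips_per_sec, arp_requests_per_trigger):
    """Return (ARP packets/sec, ARP packets/min) generated by the router."""
    per_sec = scan_rate_ips_per_sec * arp_requests_per_trigger
    return per_sec, per_sec * 60

# Unused /24 scanned at 4 IPs/sec; each unresolved destination triggers
# 1 initial ARP request plus 4 retries (as seen in the lab capture).
per_sec, per_min = arp_amplification(4, 5)
print(per_sec, per_min)  # 20 1200
```

So a fairly modest scan rate already produces about 20 broadcast ARP frames a second on the subnet, which is the noise floor I am trying to reduce.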
[j-nsp] ARP resolution algorithm? Storage of MX transit packets?
I am trying to wrap my head around how the MX handles ARP resolution, and how it stores packets waiting to be transmitted while waiting for ARP to resolve.

If a transit packet comes into the router on a PFE, and there is no corresponding entry in the ARP cache for the next hop, the routing engine gets involved in order to perform ARP resolution. From Junos 16.2 onward, what should show up in rtsockmon -tn to help locate the ARP query, the next-hop programming once the reply is received, etc.?

If an ARP request is generated and no response is received within some number of seconds, it looks like another request is sent out, or perhaps more. If no response ever comes back, after some period of time, I presume the router will drop the transit packet.

So, where is this transit packet being held while it waits for the ARP reply to come back (if it ever does)? How is the packet being stored? Are the packets stored via a hash in separate queues of some sort, so that other transit traffic does not get blocked?

What is strange is that when a string of transit packets comes in with no corresponding ARP entry in the cache, the order in which the RE sends out ARP requests does not exactly correspond to the order of the transit packets as they come into the PFE. I would have expected a FIFO-like mechanism, but this does not seem to be the case. Does anyone have an explanation for this behavior, or better, for how the ARP resolution algorithm is supposed to work at the packet buffering level?

Below is a half-second sample of what traffic comes into the router needing ARP resolution, followed by the ARP requests the RE actually sends out.
Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
200 Ukrop Way
Williamsburg VA 23187

Off the Wire, Just Before Traffic Enters the MX:

21:51:16.158064 IP 185.254.123.12.46185 > 100.64.101.189.3588:
21:51:16.297351 IP 92.63.194.38.47423 > 100.64.101.25.55126:
21:51:16.301438 IP 185.53.91.24.55823 > 100.64.101.88.5038:
21:51:16.385521 IP 185.176.27.34.58908 > 100.64.101.215.1288:
21:51:16.449858 IP 92.53.90.143.44499 > 100.64.101.192.282:
21:51:16.462591 IP 92.53.90.143.44499 > 100.64.101.181.282:
21:51:16.470221 IP 185.143.221.106.58528 > 100.64.101.1.4040:
21:51:16.492806 IP 92.63.194.38.47423 > 100.64.101.35.55126:
21:51:16.500132 IP 92.63.194.38.47423 > 100.64.101.58.55126:

ARP Requests Coming Out of the RE:

21:51:16.158515 ARP, Request who-has 100.64.101.189 tell 100.64.101.3
21:51:16.227443 ARP, Request who-has 100.64.101.50 tell 100.64.101.3
21:51:16.227985 ARP, Request who-has 100.64.101.158 tell 100.64.101.3
21:51:16.297828 ARP, Request who-has 100.64.101.25 tell 100.64.101.3
21:51:16.327204 ARP, Request who-has 100.64.101.59 tell 100.64.101.3
21:51:16.327664 ARP, Request who-has 100.64.101.65 tell 100.64.101.3
21:51:16.427452 ARP, Request who-has 100.64.101.2 tell 100.64.101.3
21:51:16.428282 ARP, Request who-has 100.64.101.9 tell 100.64.101.3
21:51:16.473085 ARP, Request who-has 100.64.101.1 tell 100.64.101.3
21:51:16.527447 ARP, Request who-has 100.64.101.7 tell 100.64.101.3
21:51:16.528278 ARP, Request who-has 100.64.101.88 tell 100.64.101.3
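For what it is worth, the buffering model I had been imagining is a per-next-hop hold queue: a hash keyed on the unresolved next-hop, each bucket holding a small number of packets until ARP resolves or a timer expires. This is purely a sketch of my mental model, not a claim about what Junos actually does; all names and limits here are made up:

```python
from collections import defaultdict, deque

class ArpHoldBuffer:
    """Toy model: park transit packets per unresolved next-hop,
    release them when ARP resolves, drop them on timeout.
    Purely illustrative -- not how any real PFE implements this."""

    def __init__(self, max_per_nexthop=3):
        self.max_per_nexthop = max_per_nexthop
        self.pending = defaultdict(deque)  # next-hop IP -> queued packets

    def hold(self, next_hop, packet):
        q = self.pending[next_hop]
        if len(q) >= self.max_per_nexthop:
            q.popleft()  # oldest queued packet gives way
        q.append(packet)

    def resolved(self, next_hop):
        """ARP reply arrived: flush and forward everything queued."""
        return list(self.pending.pop(next_hop, deque()))

    def timed_out(self, next_hop):
        """No ARP reply: drop everything queued for this next-hop."""
        self.pending.pop(next_hop, None)

buf = ArpHoldBuffer()
buf.hold("192.168.10.21", "pkt1")
buf.hold("192.168.10.21", "pkt2")
print(buf.resolved("192.168.10.21"))  # ['pkt1', 'pkt2']
```

A scheme like this would explain why other transit traffic is not blocked, but it would still preserve per-next-hop ordering, so it does not by itself explain the out-of-order ARP requests I observed.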
Re: [j-nsp] Junos Arp Expiration Timer Behavior & Active Flows
Thanks, Brian. Unfortunately, the MX policer is not granular enough to trim the unwanted traffic down far enough, in one particular use case I am dealing with. Excessive ARPs can easily overwhelm some downstream hosts, some more than others.

Clarke Morledge
College of William and Mary
Re: [j-nsp] Junos Arp Expiration Timer Behavior & Active Flows
Thank you for the responses, folks.

I am trying to figure out a way to cut down on ARP traffic, particularly traffic resulting from continued sweeps/scans running across our IP space from the InterWebs, especially for IPs that are currently not in use.

Simply jacking up the ARP aging-timer is not a completely trustworthy solution, since if you change the MAC address of a downstream host, the upstream router has to time out its ARP entry before it learns the new downstream MAC... assuming the new downstream host does not send an ARP request of its own right away.

Has anyone worked with the ARP Cache Protection feature, released in 16.1? I was hoping to get this to work for me, but I am having a difficult time wrapping my head around the arp-new-hold-limit knob and how it is supposed to work:

https://www.juniper.net/documentation/en_US/junos/topics/example/example-arp-cache-protection-configuring.html

It seems like the feature is designed more to protect the router from DDoS attacks, and not so much to protect downstream nodes from bogus ARP traffic.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
200 Ukrop Way
Williamsburg VA 23187
[j-nsp] Junos Arp Expiration Timer Behavior & Active Flows
According to KB19396, "the Address Resolution Protocol (ARP) expiration timer does not refresh even if there is an active traffic flow in the router. This is the default behavior of all routers running Junos OS." The default timer is 20 minutes. I have confirmed this behavior on the MX platform.

This does not seem very intuitive, as it suggests that a Junos device at L3 will stop in the middle of an active flow to send an ARP request to refresh its ARP cache, potentially causing some unnecessary queuing of traffic while the Junos device waits for ARP resolution. For an active flow, the ARP response should come back quickly, but it still seems unnecessary. I would have thought that the ARP cache would only start to decrement the expiration timer when the device was not seeing any traffic to/from the host in the ARP entry.

KB19396 goes on to say, "When the ARP timer reaches 20 (+/- 25%) minutes, the router will initiate an ARP request for that entry to check that the host is still alive." I can see that when the ARP timer is started initially, the expiration countdown starts at this (+/- 25%) value, and not exactly at, say, 20 minutes, which is the default timer value.

A couple of questions:

(a) Is this the default behavior across all Junos platforms, including MX, SRX, and EX?

(b) Are there any other caveats as to when the Junos device will send out the ARP request at the end of the expiration period?

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
200 Ukrop Way
Williamsburg VA 23187
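To make the (+/- 25%) wording concrete, here is a quick sketch of how I read KB19396: each entry's expiration countdown is chosen somewhere between 15 and 25 minutes for the 20-minute default. This is my interpretation of the KB's wording, not confirmed Junos internals:

```python
import random

DEFAULT_AGING_MINUTES = 20  # Junos default ARP aging timer

def jittered_expiry(aging_minutes=DEFAULT_AGING_MINUTES, jitter=0.25):
    """Pick an expiration countdown of aging-timer +/- 25%,
    per my reading of KB19396's wording."""
    return aging_minutes * random.uniform(1 - jitter, 1 + jitter)

# With the 20-minute default, every countdown lands in [15, 25] minutes,
# regardless of whether traffic to/from the host is still flowing.
samples = [jittered_expiry() for _ in range(10000)]
print(min(samples) >= 15, max(samples) <= 25)  # True True
```

The jitter presumably spreads out re-ARP storms when many entries are learned at once, which would explain why the countdown never starts at exactly 20 minutes.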
Re: [j-nsp] SRX300 DHCPD vs JDHCP - DHCP client issues
John Jensen,

I cannot comment on your specific issue, but I ran into a different DHCP issue with the new JDHCP-style format that forced me to go back to the legacy format, which was working just fine. I am running 15.1X49-D100.6.

Unfortunately, the PR is private, so there are no details. It looks like I will be waiting for a new version of JUNOS to come out before I can hope to use the new JDHCP method.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
Re: [j-nsp] Negative ARP caching, on an MX router (again)
Thank you, Eduardo. I should have mentioned that I was also trying to avoid dropping possibly legitimate ARP requests due to overaggressive policing.

Clarke

On Mon, 3 Apr 2017, Eduardo Schoedler wrote:

> Hi Clarke,
>
> Maybe arp policer problem?
>
> https://lists.gt.net/nsp/juniper/18201#18201
>
> Regards,
[j-nsp] Negative ARP caching, on an MX router (again)
I would like to revisit a question that has come up several times on the list:

https://lists.gt.net/nsp/juniper/57670
https://lists.gt.net/nsp/juniper/60797

I am trying to figure out a way to cut down on unnecessary ARP requests being generated by MX routers when someone comes sweeping across my L3 space, triggering these unnecessary ARP broadcasts for unused addresses.

There is a possible solution in ARP sponging, but it would be really, really nice if there was something on-board in JUNOS to handle this, instead of rolling out a special-purpose box:

https://ams-ix.net/technical/specifications-descriptions/controlling-arp-traffic-on-ams-ix-platform

Ideally, JUNOS could do something like this:

(a) Get an incoming packet that would trigger an ARP request to go out.

(b) If the router does not get a response back after X number of tries in Y number of seconds, put some type of dummy MAC address in the ARP cache that can be easily sinkholed.

(c) Stay in this state for Z number of seconds, before flushing that dummy MAC address out of the cache and re-enabling ARP for that particular address.

(d) In addition, the router would passively listen for packets coming into the L3 interface that would overwrite the dummy MAC address in the ARP cache with a (hopefully) legitimate MAC address, allowing the process to exit the above state without waiting for the "Z" timer to expire.

Is there any way that JUNOS on an MX could be configured to do this? Enhancement request, anyone?

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
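The (a)-(d) behavior above amounts to a small state machine per address. Here is a toy sketch of the logic I am wishing for; the timer values, the class, and the sinkhole MAC are all hypothetical placeholders, not Junos knobs:

```python
# Toy model of the negative-ARP-cache behavior proposed above.
# max_tries plays the role of X, sinkhole_secs the role of Z.
# Everything here is hypothetical -- nothing maps to a real Junos feature.

SINKHOLE_MAC = "de:ad:be:ef:00:00"  # dummy MAC, easy to filter/sinkhole

class NegativeArpCache:
    def __init__(self, max_tries=3, sinkhole_secs=300):
        self.max_tries = max_tries
        self.sinkhole_secs = sinkhole_secs
        self.tries = {}   # ip -> failed ARP attempts so far
        self.cache = {}   # ip -> (mac, expires_at)

    def arp_failed(self, ip, now):
        n = self.tries.get(ip, 0) + 1
        if n >= self.max_tries:
            # (b) install a sinkhole entry instead of retrying forever
            self.cache[ip] = (SINKHOLE_MAC, now + self.sinkhole_secs)
            self.tries.pop(ip, None)
        else:
            self.tries[ip] = n

    def packet_seen_from(self, ip, mac, now, hold_secs=1200):
        # (d) passive learning overrides the sinkhole immediately
        self.cache[ip] = (mac, now + hold_secs)
        self.tries.pop(ip, None)

    def lookup(self, ip, now):
        entry = self.cache.get(ip)
        if entry and now < entry[1]:
            return entry[0]
        self.cache.pop(ip, None)  # (c) Z expired: re-enable normal ARP
        return None

cache = NegativeArpCache()
for _ in range(3):
    cache.arp_failed("192.168.10.21", now=0)
print(cache.lookup("192.168.10.21", now=10))   # sinkhole entry installed
cache.packet_seen_from("192.168.10.21", "aa:bb:cc:dd:ee:ff", now=20)
print(cache.lookup("192.168.10.21", now=30))   # real MAC learned passively
```

While the sinkhole entry is in place, incoming packets for that address could be dropped cheaply in the forwarding path instead of punting ARP work to the RE.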
[j-nsp] RE-S-X6-64G & ISSU?
I would like to revisit a previous thread here:

http://www.gossamer-threads.com/lists/nsp/juniper/58555

With the new RE-S-X6-64G routing engine, is part of the goal here to provide a method to eliminate the need for a redundant routing engine in the MX?

Right now, we do employ redundant routing engines (1300s and 1800s), but mostly with ISSU in mind. The failure rate for routing engines, even with the older hard disk models, has been rather low in our experience. So, the primary benefit has been support for "hitless" upgrades via ISSU.

Unfortunately, our experience with ISSU has been disappointingly mixed, at best. Granted, even with our problems with ISSU, our outages due to ISSU failures are much shorter compared to upgrading a router with a single RE. But if the prospect of using VM technology on the RE-S-X6-64G includes the possibility of doing ISSU with only a single RE, that would greatly reduce our requirement to supply dual REs in our MXs. It would be pretty sweet to free up that redundant RE slot for something revenue-producing.

Is this a legitimate expectation that I have, i.e. a single RE-S-X6-64G doing hitless ISSU, or am I just dreaming too much? Are there restrictions on the RE-S-X6-64G regarding this functionality, and if it is not here yet, is there any release train where it is expected to come?

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
Re: [j-nsp] BFD/IS-IS wait to re-establish adjacency after failure tweak knob?
Thanks, Mark. I looked into that, but the situation is such that the physical link itself remains up. The problem is that the L2 device in between is dropping packets. I should have clarified that.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
Re: [j-nsp] EVPN
Resurrecting an older thread: Amos at oasis-tech had this to say about his EVPN testing:

"On single active multi homing, when the CE is a L2 switch. When failing the active link the switch will learn the remote destination MAC through the standby link very quickly. However, when the active link recovers and becomes active once again, the CE MAC table does not flush and the CE keep sending traffic to backup port that is now blocking. Obviously this is only applicable to uni directional traffic scenarios. On bi-directional scenarios MAC learning works like a charm."

I am running up against this in our lab testing. It would be nice to find some way to have EVPN trigger a topology change via MSTP to cause the CE to flush its MAC table upon active link recovery. Getting two different L2 topologies (e.g. EVPN and MSTP) within the same L2 domain to sync up is a real pain.

Does anyone have a solution to this problem?

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
[j-nsp] EVPN and DPC line cards: Deciphering JUNOS docs?
I am trying to wrap my head around the following note in this doc:

http://www.juniper.net/techpubs/en_US/junos15.1/topics/example/example-evpn-multihoming-configuring.html

"Note: Prior to Junos OS Release 15.1, the EVPN functionality support on MX Series Routers was limited to routers using MPC and MIC interfaces only. However, starting with Junos OS Release 15.1, MX Series Routers using DPCs can be leveraged to provide EVPN support on the CE device-facing interface. The DPC support for EVPN is provided with the following considerations:

DPCs provide support for EVPN in the active-standby mode of operation, including support for the following:
a. EVPN instance (EVI)
b. Virtual switch (VS)
c. Integrated routing and bridging (IRB) interfaces

DPCs intended for providing the EVPN active-standby support should be the CE device-facing line card. The PE device interfaces in the EVPN domain should use only MPC and MIC interfaces."

At first, I assumed this means that EVPN is supported as of 15.1 for DPC line cards in PE routers that are facing CE devices. Something that is a "CE device-facing line card" would be on a PE router. But then that last sentence confuses me by saying that "PE device interfaces" should NOT use the DPC line cards.

Does anyone have their JUNOS documentation decoder ring handy to help interpret this for me?

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
Re: [j-nsp] Segment Routing ( SPRING )
I am working my way through _MPLS in the SDN Era_, and I am really intrigued by what I read about SPRING.

I got burned a few years ago trying to deploy RSVP for traffic engineering purposes. I ran into so many serious JUNOS bugs that I had to abandon it, use LDP to handle labels, and stick with IGP metric manipulation to do some basic traffic engineering. Since I do not need bandwidth reservation, I have actually appreciated the simplicity of LDP. But it looks like SPRING does pretty much the same thing with less control plane overhead, which is even more attractive.

I have a few questions for those who might know:

(a) How mature is SPRING, considering that the IS-IS IGP it is built on is well-established?

(b) Are there any noticeable behavioral differences between SPRING and LDP implementations?

(c) Do we have any idea when P2MP LSPs will come along with SPRING? Will it need to be coupled somehow with PIM?

Thanks.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
Re: [j-nsp] MX/Trio, mirror IRB egress traffic with VLANs [OFFLIST]
I have not been able to figure this one out either. Very frustrating. If you do find an answer, please post to the list.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187

--

Message: 5
Date: Fri, 13 Mar 2015 12:16:00 +0200
From: Saku Ytti s...@ytti.fi
To: juniper-nsp@puck.nether.net
Subject: [j-nsp] MX/Trio, mirror IRB egress traffic with VLANs
Message-ID: 20150313101600.ga12...@pob.ytti.fi
Content-Type: text/plain; charset=us-ascii

Hey,

Consider this traffic flow:

  MPLS_core -> irb -> virtual-switch -> IFL

As far as I can see, there is no way to mirror this traffic with VLAN headers intact?

a) irb output filter sees traffic, but not VLAN
b) virtual-switch forwarding-options filter input does not see IRB traffic
c) IFL family bridge filter output does not see IRB traffic

Any solution to this problem?

However, the other direction:

  IFL -> virtual-switch -> irb -> MPLS_core

a) IFL can mirror traffic with VLAN
b) virtual-switch forwarding-options filter input sees traffic with VLAN
c) irb input won't see (expectedly) VLANs

--
++ytti
Re: [j-nsp] Full BGP table, one provider w/ 2 routers, slow forwarding convergence
I wanted to follow up on the list with a report of what I did to try to resolve this issue.

I have not had the opportunity to play tricks with routing policy to only populate the forwarding table with a default route while still receiving a full Internet routing table via BGP. It is a little bit tricky for me, since I also have another ISP to worry about, which complicates our BGP design.

However, what I did try was load balancing across both of my outgoing links, using the multipath option described here:

http://www.juniper.net/techpubs/en_US/junos13.3/topics/topic-map/bgp-multipath-unequal.html

It breaks my plan to do active/passive between the two links for outgoing traffic. I am using MED to control preference for incoming traffic, which works well for an active/passive design on that side. I was able to reduce the failover time, in the event of a fiber cut type of issue, from 90 seconds down to about 10 or 20 seconds for most traffic.

However, this only works if you are receiving the exact same copy of the Internet routing table from the upstream provider. In my case, there is enough variation in the routes coming from the upstream provider's two routers that I do not really get full load balancing in the forwarding table. In those cases, I am back to about 70 to 90 seconds to complete a failover transition for outgoing traffic.

I did upgrade to 13.3R3 to try to resolve PR963060, but I am still seeing enough traffic loss that I am skeptical as to whether I have really hit this PR, whether the issue has not really been resolved, or whether something else is going on. I still think Junos should converge faster.

Clarke Morledge
College of William and Mary
[j-nsp] Full BGP table, one provider w/ 2 routers, slow forwarding convergence
I am trying to resolve a forwarding convergence problem in our existing architecture, which does BGP with full routing feeds from upstream providers.

In one particular case, I am multihomed to a single provider (single AS), with two routers (A and B) in different locations for redundancy. My objective initially is an active/passive scenario, failing over to the backup link to this provider in the event of a fiber cut, using BFD to signal the problem to BGP.

My first thought was to establish one external BGP group connecting to neighbor A, sending out my routes without much AS prepending, and setting a high local preference for incoming routes. A second external BGP group connects to neighbor router B, using lots of AS prepending on my routes going out and a lower local preference on routes coming in.

In testing the design, my advertisements going out get updated almost immediately with my upstream provider, per their looking glass during a fiber cut. But on my end, even though BGP starts to change the preference for the incoming routes fairly quickly, it takes a long time to push the changes to the forwarding tables in the PFE. With the full Internet table, I have seen it take up to about 80 to 90 seconds for a few selected routes. My objective was to get the failover to complete in less than 20 seconds.

Presumably, if I were only handling the default route, the solution would be trivial, but at this point I need to keep receiving the full Internet table. Can I do what I need to do with some sort of BGP multipath load balancing, while keeping my traffic engineering objectives intact?

Below are some config snippets. Thanks for any suggestions/solutions.
Clarke Morledge
College of William and Mary

Upstream Provider ASN: 65001
Upstream Provider Router A (Primary): 172.16.0.2
Upstream Provider Router B (Backup): 172.16.1.2

[edit policy-options policy-statement bgp-isp-router-b-out]
term local-16 {
    from {
        route-filter 192.168.0.0/16 exact;
    }
    then {
        as-path-prepend "65002 65002 65002 65002 65002 65002 65002 65002 65002";
        accept;
    }
}

[edit policy-options policy-statement bgp-isp-router-a-out]
term local-16 {
    from {
        route-filter 192.168.0.0/16 exact;
    }
    then {
        as-path-prepend "65002 65002 65002";
        accept;
    }
}

[edit policy-options policy-statement bgp-isp-router-b-in]
term default {
    then {
        local-preference 285;
        accept;
    }
}

[edit policy-options policy-statement bgp-isp-router-a-in]
term default {
    then {
        local-preference 290;
        accept;
    }
}

[edit protocols bgp]
group isp-router-a {
    type external;
    import bgp-isp-router-a-in;
    export bgp-isp-router-a-out;
    peer-as 65001;
    bfd-liveness-detection {
        minimum-interval 999;
        multiplier 10;
    }
    neighbor 172.16.0.2;
}
group isp-router-b {
    type external;
    import bgp-isp-router-b-in;
    export bgp-isp-router-b-out;
    peer-as 65001;
    bfd-liveness-detection {
        minimum-interval 999;
        multiplier 10;
    }
    neighbor 172.16.1.2;
}
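As an aside on the failover-time budget: with the BFD settings in the snippets above, detection alone takes roughly 10 seconds before BGP is even told the session is down, assuming the negotiated interval ends up equal to the configured minimum-interval. A quick check of the arithmetic:

```python
def bfd_detection_time_secs(minimum_interval_ms, multiplier):
    """BFD declares the session down after `multiplier` consecutive
    missed packets at the negotiated transmit interval."""
    return minimum_interval_ms * multiplier / 1000.0

# 999 ms x 10 = about 10 seconds of detection time before BGP reacts,
# so a sub-20-second total failover leaves little room for FIB churn.
print(bfd_detection_time_secs(999, 10))  # 9.99
```

So even before any RIB-to-FIB convergence delay, about half of a 20-second failover target is consumed just detecting the failure.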
Re: [j-nsp] Full BGP table, one provider w/ 2 routers, slow forwarding convergence
Amos,

I am using an MX240, and I am aware of the MX80 platform issue when dealing with multiple BGP feeds. I have the newer 1800 RE, so I was hoping to completely avoid anything like that with a beefier RE running 64-bit JUNOS.

I do need the full Internet feeds for other reasons, but I am interested in the option to filter routes between the RIB and the FIB, to keep my FIB smaller while sending the full table downstream. What JUNOS knob does that?

Do you happen to know the PR number for the full routing table and netflow issue? I am doing inline jflow, so perhaps that may have something to do with it.

Clarke Morledge
College of William and Mary

On Thu, 14 Aug 2014, Amos Rosenboim wrote:

> What model of router are you using?
>
> What you are describing is a general problem of Juniper routers; however, it's really bad on the low-to-mid range routers (MX5-80); the 104 is slightly better, but not very. The stronger REs are less prone to this, although the real solution is a serious change to RPD. Recent releases should have incremental improvements, although AFAIK the root cause was not corrected.
>
> There was also another similar issue that involved the full routing table and netflow. I believe this one was corrected in one of the recent releases.
>
> Do you really need the full routing table, especially when both links are to the same ISP?
>
> There is also an option to filter routes between the RIB and FIB, so you can send the full table downstream but rely on a smaller set of routes for forwarding.
>
> Cheers,
> Amos
Re: [j-nsp] Full BGP table, one provider w/ 2 routers, slow forwarding convergence
Roland Dobbins asked in response to my question ("Can I do what I need to do with some sort of BGP multipath load balancing, while keeping my traffic engineering objectives intact?"):

> Why are you going for failover instead of active/active? With a failover scenario, X% of your capacity is going unused . . .
>
> Also, why multihome into the same upstream transit provider? A higher degree of resiliency is achieved by multihoming with multiple transit providers.

Hi, Roland,

It is a bit involved, but both of my paths, up to router A and router B for this particular ISP, are shared with other customers. So I am trying to do some traffic engineering and shaping so that I do not interfere with what the other customers are doing on these shared pipes.

As I mentioned earlier, I do have multiple providers, but I am mainly trying to solve the forwarding problem for my main provider, where I am multihomed, to meet my traffic engineering requirements.

Clarke Morledge
College of William and Mary
Re: [j-nsp] Full BGP table, one provider w/ 2 routers, slow forwarding convergence
I will probably try the FIB filtering idea, but I was curious to know if anyone has tried doing an active/passive scenario with a single provider, multihomed on two different routers, using BGP multipath for load balancing. I see a document that describes "Load Balancing BGP Traffic with Unequal Bandwidth Allocated to the Paths":

http://www.juniper.net/techpubs/en_US/junos13.3/topics/topic-map/bgp-multipath-unequal.html

But in the future, I would like to be able to better control the routes I send out to the upstream provider, without getting them too involved in tweaking stuff on their end. Using BGP multipath, I think I can just use something like MED to control preference for incoming traffic, but I believe this is applied on the neighbor statement; that is, it is not something I can control via policy when more than one neighbor is involved for different routes that I am advertising. Please correct me if I am wrong.

Does anyone have experience with this?

Clarke Morledge
College of William and Mary
Re: [j-nsp] Multicast/Broadcast Packets going to EX CPU
Sebastian,

We are using a combination of storm-control and firewall filters, just to throttle the multicast back. Nothing special. Since we are not officially supporting multicast applications, this has not really hurt us yet.

Clarke

> * Clarke Morledge chm...@wm.edu [2014-03-06 16:42]:
>> Sebastian,
>>
>> No, you are not alone on this issue. For a little more context, I have seen the same type of behavior associated with Apple Bonjour traffic related to Multicast DNS, reported on this thread in November 2013:
>>
>> http://www.gossamer-threads.com/lists/nsp/juniper/48269?do=post_view_flat#48269
>>
>> Currently, we are implementing ways of limiting multicast. I am aware that this is more of a bandaid approach, but I have never heard a completely satisfactory explanation or solution for this behavior on the EX platform.
>
> Thank you for your reply. Can you share a bit more about what countermeasures you are implementing? storm-control? firewall filters?
>
>> If anyone comes up with some good answers, please inform the list.
>
> +1
>
> Regards
> Sebastian
Re: [j-nsp] EX cpu performance under multicast replication load?
Chuck (or anyone else who might know):

Would it be fair to say that QoS marking and handling might help mitigate this problem's impact on CPU performance? But would it help in the situation where the switch itself needs to mark the packets for QoS? Would the marking (and subsequent QoS processing) of the packets happen on the PFE before they hit the CPU?

Clarke

On Wed, 13 Nov 2013, Chuck Anderson wrote:

> If the multicast traffic is using a group that shares the same multicast MAC address as some control-plane protocol groups (224.0.0.X), then the RE CPU needs to get a copy of all those packets in case it needs to act on possible control-plane traffic. This is a problem that most switches have. See:
>
> http://web.urz.uni-heidelberg.de/Netzdienste/docext/3com/superstack/3_0/3900/2i3igmp6.html
>
> On Wed, Nov 13, 2013 at 11:33:57AM -0500, Clarke Morledge wrote:
>> I am seeing some bothersome CPU performance issues on EX switches, mostly on the less powerful units like the 2200s, when it comes to handling multicast. In practical terms, I do not see much multicast traffic in general, except that on our campus we do get a lot of Apple Bonjour traffic (Multicast DNS). Sometimes a single host will go a little bonkers with repeated mDNS packets. In one case, I have seen a flood of about 100 multicast packets per second, related to Bonjour, cause the CPU on the lower-end EX switches to spike dramatically, resulting in loss of management of the switch during peak load. For example, the switch will stop answering ICMP echo requests to its management IP, or it will miss RADIUS packets.
>>
>> Can someone walk me through the EX architecture a bit and tell me whether this is expected behavior? I am assuming the EX CPU is actually handling the multicast replication of received Ethernet frames to be sent out other ports, but it seems like 100 packets per second should not be a big deal. So something looks awry.
>> Oddly enough, I do not see any performance issues when straight-up broadcast traffic hits these kinds of packet rates.
>>
>> To mitigate this, I guess I could use QoS to prioritize management frames over user multicast data, but if the issue is about packet replication and not forwarding, I am not entirely convinced that the standard QoS marking and handling parameters would be effective.
>>
>> Any ideas?
[j-nsp] EX cpu performance under multicast replication load?
I am seeing some bothersome CPU performance issues on EX switches, mostly on the less powerful units like the 2200s, when it comes to handling multicast.

In practical terms, I do not see much multicast traffic in general, except that on our campus we do get a lot of Apple Bonjour traffic (Multicast DNS). Sometimes a single host will go a little bonkers with repeated mDNS packets. In one case, I have seen a flood of about 100 multicast packets per second, related to Bonjour, cause the CPU on the lower-end EX switches to spike dramatically, resulting in loss of management of the switch during peak load. For example, the switch will stop answering ICMP echo requests to its management IP, or it will miss RADIUS packets.

Can someone walk me through the EX architecture a bit and tell me whether this is expected behavior? I am assuming the EX CPU is actually handling the multicast replication of received Ethernet frames to be sent out other ports, but it seems like 100 packets per second should not be a big deal. So something looks awry.

Oddly enough, I do not see any performance issues when straight-up broadcast traffic hits these kinds of packet rates.

To mitigate this, I guess I could use QoS to prioritize management frames over user multicast data, but if the issue is about packet replication and not forwarding, I am not entirely convinced that the standard QoS marking and handling parameters would be effective. Any ideas?

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
Re: [j-nsp] limitation to vrrp-group inheritance on MX?
Ben,

Thank you for the explanation. I verified through some testing that it works. I guess I am just accustomed to the Cisco way of doing things, where a whole group of IP subnets on one VLAN can all share the same VRRP address, including the handling of MAC address learning. In that approach, there is no need for a separate VRRP MAC for each subnet on the same VLAN. Juniper's approach just seems inefficient and unnecessary.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
[j-nsp] limitation to vrrp-group inheritance on MX?
It looks like there is a limitation on the number of times you can inherit settings from a particular vrrp-group on a single interface. Is this correct?

Assume you have a single VLAN with multiple IP subnets configured. All you need is a single vrrp-group, from which all of the other IP subnets can inherit their VRRP config information; say, the vrrp-group on the preferred address. For example:

[edit interfaces irb unit 100]
MX# show
family inet {
    address 192.168.37.3/25 {
        preferred;
        vrrp-group 100 {
            priority 125;
            accept-data;
            virtual-address 192.168.37.1;
        }
    }
    address 192.168.38.3/25 {
        vrrp-group 101 {
            virtual-address 192.168.38.1;
            vrrp-inherit-from {
                active-interface irb.100;
                active-group 100;
            }
        }
    }
    address 192.168.39.3/25 {
        vrrp-group 102 {
            virtual-address 192.168.39.1;
            vrrp-inherit-from {
                active-interface irb.100;
                active-group 100;
            }
        }
    }
}

For each IP address configured on the IRB interface (associated with one particular VLAN), you must have a DIFFERENT vrrp-group configured, even though the inheriting addresses are effectively only using the vrrp-group number as a unique identifier and placeholder. If you try to use the SAME vrrp-group number for each address (e.g. 100), you get a configuration error upon commit: "Duplicate interface: irb unit: 100 vrrp-group: 100 for address:".

VRRP has a limit on the number of groups available per VLAN: 255. Granted, having more than 255 addresses per interface is a lot, but it seems arbitrary that the MX limits you to only 255 IP subnets per VLAN that can use VRRP. Having a maximum of 255 ACTIVE VRRP groups per VLAN makes sense, as this is what the VRRP standard specifies. But when you have a bunch of basically inactive groups that all inherit from one active group, it seems bizarre that Junos says: NOPE, you can only have a maximum of 254 placeholders for inactive vrrp groups per interface.

Am I misunderstanding something here?
Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
[j-nsp] ISSU timeouts on MX upgrades due to large routing tables?
I was curious to know if anyone has run into issues with large routing tables on an MX causing ISSU upgrades to fail.

On several occasions, I have been able to successfully do an In-Service Software Upgrade (ISSU) in a lab environment, only to have it fail in production. I find it difficult to replicate the issue in a lab, since in production I am dealing with far more routes than in a small lab.

Has anyone seen a case where the backup RE gets its new software and reboots, but because it takes so long to populate the kernel routing database on the newly upgraded RE, the ISSU appears to time out? I have seen behavior like this with upgrades moving from 10.x to a newer 10.y, and from 10.x to 11.y.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
[j-nsp] Logical tunnels on MPC2 and MICs
I am a little confused about logical tunnel-services configuration on the MPC2, for both the chassis-based platforms and the MX80. Do you really need a MIC installed in the MPC if you want to configure a logical tunnel (lt)?

Part of me says you do not, simply because the tunnel is happening on the PFE; since the PFE sits on the MPC2 itself, the MIC would not be necessary. But the other part of me knows that with the older non-Trio hardware the PFE is integrated with the physical interfaces, so perhaps a MIC is required in the Trio world too. That I do not understand, since I do not get why the PFE needs a physical interface in order to make a tunnel work.

If someone could straighten me out, that would be great. Thanks.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
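If my first theory is right, I would expect something like the following to be enough on its own, with the tunnel bandwidth carved out of the PFE under the chassis hierarchy and no MIC involved (slot/PIC numbers and bandwidth here are made up for illustration):

    chassis {
        fpc 1 {
            pic 0 {
                tunnel-services {
                    bandwidth 1g;
                }
            }
        }
    }

after which lt- (and gr-, vt-) interfaces should show up for that PFE. I would be glad to be corrected if a MIC really is required.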
Re: [j-nsp] ability to turn USB port on/off for MX routing engine?
Morgan,

All I need is the USB power. It is an optical bypass switch. I was trying to avoid the USB-hub-on-a-remote-power-strip solution :-)

That usbconfig binary would have been nice to keep!

Clarke

On 20 March 2013 at 15:50, Morgan McLean wrx...@gmail.com wrote:

> Just curious, what is it that's hanging off it? Does it interact with the router at all, or is it purely just getting power from the USB port? If it needs to interface with the router and you just want it shut off... USB hub with its power socket going to a remote power device? lol.
>
> Morgan
Re: [j-nsp] ability to turn USB port on/off for MX routing engine?
To answer my own question, I found out from JTAC that the ability to turn off the power to the USB port on an MX routing engine is not possible because it is not supported.

I thought Junos was built on FreeBSD. Aren't you supposed to be able to do just about anything you want with FreeBSD?

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
[j-nsp] ability to turn USB port on/off for MX routing engine?
This may seem like a totally random question, but does anyone know if there is a way to disable/enable power to the USB port that is built into the routing engine on the MX?

I have an application where it would be useful to flip the USB power on and off. The USB device is not a hard disk, so umount probably will not work. This is NOT to be confused with the "request system power-off media usb" command, which powers down the router and then forces the system to read the USB for the next boot image.

Thanks.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
[j-nsp] MX L2 port mirror into VPLS instance?
Any ideas on how to do a Layer 2 port mirror on an MX whereby the output gets dumped into a VPLS instance?

I was hoping to dump the port mirror output into a VPLS instance and turn off MAC learning for that instance, so that the mirror traffic gets flooded across the VPLS domain to a remote analyzer. The documentation for the MX shows that you can dump the packet output out an external port, but nothing regarding VPLS flooding for the output.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
Re: [j-nsp] MX L2 port mirror into VPLS instance?
I should hasten to add that I was hoping to do this without having to burn up two revenue ports; i.e., sending the traffic out an external interface and looping it back into another port where my VPLS instance lives. If I can do this all internally on the MX, that would be best.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
chm...@wm.edu
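One idea I have been toying with, completely untested, is to use a logical tunnel as the mirror destination and hang the peer unit inside the VPLS instance. All names, slot numbers, and encapsulations below are invented, and I have no idea whether port-mirroring will even accept an lt- interface as its output:

    forwarding-options {
        port-mirroring {
            family any {
                output {
                    interface lt-1/0/10.0;
                }
            }
        }
    }
    interfaces {
        lt-1/0/10 {
            unit 0 {
                encapsulation ethernet;
                peer-unit 1;
            }
            unit 1 {
                encapsulation ethernet-vpls;
                peer-unit 0;
            }
        }
    }
    routing-instances {
        MIRROR {
            instance-type vpls;
            interface lt-1/0/10.1;
        }
    }

If that worked, disabling MAC learning in the instance would then flood the mirrored frames to the remote analyzer, all without burning external ports. If anyone has actually tried this, I would love to hear the result.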
Re: [j-nsp] Interface tail drops vs. ifOutDiscards
Really? Is there any known way to measure tail drops via SNMP with Juniper? In particular, I am wondering about the MX platform. That is really odd.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187

Nick says:

> That is a known issue, and a very irritating issue for me. Please make sure that your SE knows that you are not happy with it. Maybe some time in the future, J will fix it.
>
> Nick
>
> On Fri, Jan 25, 2013 at 3:00 PM, Antti Ristimäki antti.ristim...@csc.fi wrote:
>> Hi,
>>
>> It seems that ifq tail drops don't increment the IF-MIB::ifOutDiscards counter, whereas e.g. packets dropped by RED do. Has anyone else encountered this, and is this expected behaviour or a known issue?
>>
>> -Antti
Re: [j-nsp] VPLS Multihoming
Luca,

> My question is: on PE2, is it normal for it to show the VPLS connections in an 'LN' (local site not designated) state, as shown below?
>
> PE2> show vpls connections
> Layer-2 VPN connections:
> [snip]
> Legend for interface status
> Up -- operational
> Dn -- down
> Instance: VPLS-DirectNetworks
>   Local site: IC2-VPLS (2)
>     connection-site     Type  St    Time last up    # Up trans
>     1                   rmt   LN
>     2                   rmt   LN
>
> This page on the Juniper site seems to show a similar output, but I just wanted to confirm:
> http://www.juniper.net/techpubs/en_US/junos12.2/topics/example/vpls-multihomed-example.html

Yes, this is typical. And on PE1, your primary site probably shows status RN for site 2 (remote site not designated).

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
Re: [j-nsp] port mirror to multiple ports on MX80 in inet6
Paul,

Just to come full circle on that IRB issue and L2 port mirroring, from page 213 of Hanks and Reynolds, _Juniper MX Series_, on what happens if the packet's L2 destination MAC matches the router's IRB MAC address:

> It's important to note that any bridge family filters applied to the related Layer 2 IFLs, or to the FT [forwarding table] in the BD itself, are not evaluated or processed for routed traffic, even though that traffic may ingress on a Layer 2 interface where a Layer 2 input filter is applied.

Thanks; that sounds pretty authoritative.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
Re: [j-nsp] port mirror to multiple ports on MX80 in inet6
Paul,

In your last example, assuming that your Cisco router is hanging off your mirror source port, ge-1/0/2, it makes sense from my experience that the "x.x.158.13 > x.x.158.194: ICMP echo reply" shows up in your mirror output, as I mentioned earlier, but not the ICMP echo request in the other direction. The echo request enters your L2-configured port, but since it then crosses a subnet boundary by hitting your irb.100, the MX will no longer treat it as L2 for mirroring purposes. So if you do a Layer 3 port mirror with irb.100 as your mirror source, you should be able to see the packet. Traffic coming out of the IRB and egressing the L2 mirror source port gets treated as L2, which is why the L2 mirror works in that direction. There is something about the way Integrated Routing and Bridging works that accounts for this, but I do not fully understand it.

With respect to the VLAN tag/untag behavior: because you changed the vlan-id to 1000 in the bridge-domain, while the original packet had a VLAN tag of 100, this changes the mirrored packet. It shows up in the mirror output as untagged because the encapsulation ethernet-bridge on the interface will not tag the packet. I use encapsulation flexible-ethernet-services with flexible-vlan-tagging, and with that I am able to change the vlan-id of the mirrored output if I need to.

The other cases you describe have me scratching my head, but I have seen other weird things with Layer 2 mirroring that do not make much sense to me. Why the behavior between x.x.158.13 and x.x.158.5 is reversed now is really puzzling, particularly since traffic in both directions should just be L2.

It bugs me that the L2 port mirror examples in the web documentation are really poor. They have made some improvements recently, but Juniper really needs to step up and cover these different scenarios in detail.
Typically, I need to set up a port mirror on the fly for a quick look, but instead I end up messing with JTAC for several weeks trying to get something to work that takes about 5 minutes on a Cisco platform. The flexibility of the Junos platform allows for some complex mirroring, which is great, but I have wasted a lot of time trying to get a handle on this port mirroring thing and still do not get it. Where I can afford it, I just say: forget it, I'll stick with a tap.

If you can make any better heads or tails out of this, I'd like to hear about it.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187

On Tue, 23 Oct 2012, Paul Vlaar wrote:

> On 23/10/12 10:59 PM, Clarke Morledge wrote:
>> My question for you would be whether you have an IRB interface associated with the bridge-domain that your mirror source port is in, and whether the ICMP traffic coming into the router is hitting that IRB. If that is the case, the MX will not treat the traffic coming into your IRB interface via your encapsulation ethernet-bridge as Layer 2 traffic in this context, so it will not get mirrored.
>
> There is indeed an IRB associated with the bridge-domain of the port to be mirrored:
>
> mx80> show configuration bridge-domains VLAN100 routing-interface
> routing-interface irb.100;
>
> mx80> show configuration interfaces irb.100
> family inet {
>     address x.x.158.1/26;
> }
>
> A traceroute from another router that is one L3 hop away from the MX80, to the IP address of the host connected to the interface that we're doing the port mirror on:
>
> cisco#traceroute x.x.158.13
> Type escape sequence to abort.
> Tracing the route to 199.115.158.13
> VRF info: (vrf in name/id, vrf out name/id)
>   1 x.x.158.193 0 msec 0 msec 0 msec
>   2 x.x.158.13 0 msec 0 msec 0 msec
> cisco#
>
> x.x.158.193 is the address of the point-to-point link at the MX80, and x.x.158.13 is the IP address of the mirrored host.
> So as far as I can see it's not hitting the irb.100 address; however, it is doing this on the return, as it's the default gateway out of the host at x.x.158.13. But the return is where we catch the ICMP reply, so that part works.
>
> To be complete here, this is the L3 interface where the traffic comes in from the other router:
>
> mx80> show configuration interfaces ge-1/3/11
> unit 0 {
>     family inet {
>         address x.x.158.193/30;
>     }
> }
>
> And this is the FIB entry for the target host:
>
> mx80> show route forwarding-table destination x.x.158.13
> Routing table: default.inet
> Internet:
> Destination        Type RtRef Next hop           Type Index NhRef Netif
> x.x.158.13/32      dest     1 0:1b:21:84:d7:a6   ucst   768     4 ge-1/0/2.0
>
> Routing table: default-switch.inet
> Internet:
> Destination        Type RtRef Next hop           Type Index NhRef Netif
> default            perm     0                    rjct   538     1
>
> Routing table: __master.anon__.inet
> Internet:
> Destination        Type RtRef Next hop           Type Index NhRef
Re: [j-nsp] port mirror to multiple ports on MX80 in inet6
Paul,

It occurred to me after my last round of testing the L2 port mirroring feature that the issue involving traffic that also hits an IRB along the way could be related to behavioral differences between Junos versions and chipsets.

My previous tests, in which Layer 2 port mirroring only worked for egress packets and NOT for ingress packets destined for the IRB, were on the MX240 platform (not MX80), with I-chip and therefore NOT Trio. At that time, I was also running 10.2. I tried the same type of setup again with I-chip based hardware on 10.4R10.7, and now I cannot get Layer 2 port mirroring to pick up anything when there is an IRB involved.

However, if I do the same type of configuration on an MX80 (Trio), I get the kind of results you got the first time: packets on ingress that hit the IRB get mirrored at Layer 2, but packets that egress from the IRB do not. This is on 11.4R5.5.

Fortunately, Layer 3 port mirroring off of the IRB does appear to work in all the configurations and hardware combinations I have tested. But it sure is confusing.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
Re: [j-nsp] port mirror to multiple ports on MX80 in inet6
Paul,

You asked:

> This is the interface which I want to mirror:
>
> mx80# show interfaces ge-1/0/2
> description app3.igb0;
> encapsulation ethernet-bridge;
> unit 0 {
>     family bridge {
>         filter {
>             input mirror;
>             output mirror;
>         }
>     }
> }
>
> ... When I do a ping from a host on the internet, outside the node, to the IP address of the server that is connected to ge-1/0/1, I see the ping being answered. On the analyzer connected to ge-1/3/2 I do a tcpdump and I see only the ICMP echo reply:
>
> 15:53:04.415530 00:1b:21:84:d7:a6 > 80:71:1f:c6:34:f0, ethertype 802.1Q (0x8100), length 102: vlan 100, p 2, ethertype IPv4, x.x.x.13 > x.x.x.226: ICMP echo reply, id 19022, seq 30, length 64
> 15:53:05.416447 00:1b:21:84:d7:a6 > 80:71:1f:c6:34:f0, ethertype 802.1Q (0x8100), length 102: vlan 100, p 2, ethertype IPv4, x.x.x.13 > x.x.x.226: ICMP echo reply, id 19022, seq 31, length 64
>
> Why do I not see the ICMP request going out of the port, and only the reply?

My question for you would be whether you have an IRB interface associated with the bridge-domain that your mirror source port is in, and whether the ICMP traffic coming into the router is hitting that IRB. If that is the case, the MX will not treat the traffic coming into your IRB interface via your encapsulation ethernet-bridge as Layer 2 traffic in this context, so it will not get mirrored.

You also asked:

> The interesting thing is that I do see the ICMP request when I ping from a host that is directly connected to the router, on a port that is in the same bridge-domain as ge-1/0/2:
>
> 16:02:24.160278 00:1b:21:86:a5:22 > 00:1b:21:84:d7:a6, ethertype IPv4 (0x0800), length 98: x.x.x.5 > x.x.x.13: ICMP echo request, id 16139, seq 0, length 64
> 16:02:24.160391 00:1b:21:84:d7:a6 > 00:1b:21:86:a5:22, ethertype 802.1Q (0x8100), length 102: vlan 100, p 2, ethertype IPv4, x.x.x.13 > x.x.x.5: ICMP echo reply, id 16139, seq 0, length 64
>
> Note that the ICMP request is showing as untagged traffic, yet the reply is in VLAN 100.
> On the router, ge-1/0/2 is in a bridge-domain with VLAN id 100. No other ports have the 'mirror' filter applied.
>
> Anybody ever done L2 port mirroring on an MX80, or have a clue as to why the above is happening?

With respect to the VLAN tagging on the port mirror output interface: the L2 packet being mirrored will egress with the original VLAN tag intact, no matter what vlan-id you configure on the mirror destination interface. However, if you insert the vlan-id keyword into the bridge-domain configuration, you can manipulate the VLAN tag that goes out of your mirror destination port. But if the vlan-id in the bridge domain is the same as the vlan-id of the mirror destination port, the original packet's vlan-id is preserved on output. I have not tested this, but my guess is that this might also apply to packets being mirrored that are untagged at the source.

Port mirroring on this platform is enough to make your head spin. I am working with 11.4R5.5 on an MX80.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
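For the archives, the basic shape of the L2 mirror setup I have been testing with looks like this, sanitized and abbreviated; the analyzer interface name is a placeholder, and the mirror filter is applied input/output under family bridge on the source port, as in your config:

    forwarding-options {
        port-mirroring {
            input {
                rate 1;
            }
            family any {
                output {
                    interface ge-1/3/2.0;
                }
            }
        }
    }
    firewall {
        family bridge {
            filter mirror {
                term all {
                    then {
                        port-mirror;
                        accept;
                    }
                }
            }
        }
    }

Even with something this simple, the IRB interaction described above changes what actually shows up on the analyzer.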
Re: [j-nsp] Krt queue issues
A very interesting thread. Does anyone have a good feel for how many BGP neighbors with full routing table feeds you can have before you start to hit this issue on the MX80? Are there other load factors involved? I am assuming that the RE-1300 on the MX chassis units does not suffer from this, correct?

As a workaround, could you have a script that brings up BGP neighbors in an orderly sequence after a reboot?

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
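As a crude version of that workaround, I suppose you could simply keep the full-feed sessions deactivated across the reboot and bring them back one at a time from the CLI; the group names here are invented:

    # before the reboot
    deactivate protocols bgp group TRANSIT-A
    deactivate protocols bgp group TRANSIT-B
    commit

    # after the reboot, once the RE settles, one at a time:
    activate protocols bgp group TRANSIT-A
    commit
    activate protocols bgp group TRANSIT-B
    commit

An event script could presumably automate the same sequence, but I have not tried it.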
[j-nsp] Best-site VPLS convergence feature in Junos 12.2?
I see that there is a new best-site feature in Junos 12.2 for improving convergence time in VPLS multihomed environments:

http://www.juniper.net/techpubs/en_US/junos12.2/topics/example/vpls-multihoming-convergence-example.html

With the site-preference election method for determining the primary vs. backup site, BGP signalling lets the PEs select the highest value to decide which site will actively handle traffic in order to prevent loops, making the non-primary site passive. In our experience, if you commit a PE configuration with a higher site value, VPLS converges more quickly than when you commit a PE configuration with a lower site value. So perhaps the new best-site feature might help.

But operationally, it looks like the old site-preference and the new best-site methods for determining the primary are pretty much the same. Am I missing something? Does the best-site method really improve convergence, and if so, how?

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
[j-nsp] Tricks for killing L2 loops in VPLS and STP BPDU-less situations?
We have had the unfortunate experience of users plugging small mini-switches into our network that (by default) filter out BPDUs while letting other traffic through. The nightmare scenario is when a user accidentally plugs such a switch into two of our EX switches: traffic loops through the miscreant switch between the two EXs, and without BPDUs it just looks like MAC addresses keep moving between the real source and the two EXs.

In an MX environment running VPLS, this problem can happen just as easily, as there are no BPDUs at all to protect against loops in VPLS, particularly when your VPLS domain ties into a Spanning Tree domain downstream where the potential miscreant switch may appear.

I am curious to know if anyone has come up with strategies to kill these loops, for EXs running Spanning Tree and/or MXs running VPLS. Rate-limiting may help, but it does not kill a loop completely. I am looking for ways to detect large numbers of MAC address moves (without polling for them) and to block the interfaces involved, via some trigger mechanism, when the MAC moves exceed a certain threshold. Assume Junos 10.4R10 or more recent.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
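The closest blunt instrument I have found so far is storm-control on the EXs, which at least caps the damage even though it does not kill the loop (legacy EX syntax; the bandwidth value is a placeholder, in kbps if I remember right):

    ethernet-switching-options {
        storm-control {
            interface all {
                bandwidth 1000;
            }
        }
    }

What I am really after, though, is something triggered by the MAC-move rate itself rather than by raw traffic volume.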
Re: [j-nsp] Tricks for killing L2 loops in VPLS and STP BPDU-less situations?
On Fri, 17 Aug 2012, Jensen Tyler wrote:

> A quick Google for VPLS multihoming found me this:
>
> http://www.juniper.net/techpubs/en_US/junos9.6/information-products/topic-collections/feature-guide/vpls-multihoming-bgp-signaling-solutions.html
>
> Jensen Tyler
> Sr Engineering Manager
> Fiberutilities Group, LLC

Jensen,

VPLS multihoming assumes you are intentionally building out a loop-free VPLS domain. My situation is one where a downstream customer unintentionally introduces a loop in their Layer 2 domain, which causes MAC learning table thrashing back inside your VPLS instance. Thanks for the pointer, though.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
Re: [j-nsp] Broadcast storm on M7i fxp0 kills the CFEB?
Phil,

Actually, I am not surprised that this happened to you. The fxp0 interface is a funny animal. It is not really as isolated from the rest of the box as you would think.

Since all IP broadcast/multicast on Layer 3 interfaces gets sent to the RE by default, if you get a loop that starts to pump out tons of broadcasts, then all of that traffic will start to crush the RE and/or the forwarding path to the RE. It does not matter whether the storm happens on regular interfaces or on fxp0.

The only way you can mitigate this is with RE protection filters. For example, you can implement a policer on fxp0 that handles packet bursts on ingress. But I found it just as easy to enumerate which protocols and/or source IPs need access to fxp0 and discard the rest using a firewall filter. I learned the hard way :-) You can follow this thread to find out what I went through:

http://www.gossamer-threads.com/lists/nsp/juniper/31311

My experience has been with the MX, but I am pretty sure the same applies to the M7i.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
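The enumerate-and-discard approach is the usual protect-RE pattern. Stripped way down, it looks something like this; the addresses and term names are placeholders, and a real filter also needs terms for your routing protocols, NTP, DNS, and so on before the final discard:

    firewall {
        family inet {
            filter protect-re {
                term mgmt {
                    from {
                        source-address {
                            192.0.2.0/24;    /* placeholder management subnet */
                        }
                    }
                    then accept;
                }
                term default-deny {
                    then {
                        count re-discards;
                        discard;
                    }
                }
            }
        }
    }

applied as an input filter on lo0.0 (and/or fxp0.0), so it catches everything punted toward the RE regardless of which interface the storm arrives on.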
Re: [j-nsp] Junos 10.4R8 on MX (PR 701928)
Just to close the circle on this, and thanks to those who responded:

I received information that PR 701928 is resolved in 10.4R9 and 10.4R10. For some reason that is still unknown, the Problem Report database is not being properly updated for fixes in the 10.4 code train, at least for this particular PR.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
Re: [j-nsp] Junos 10.4R8 on MX (PR 701928)
I am going back to an old thread regarding PR 701928, introduced in 10.4R8 back in January or before. The PR description summary is "DPC, ADPC, MS-DPC, MX-FPC and MX-DPC may restart with backtrace in ia_wpkt_next() routine", and there is a workaround.

Now that 10.4R10 is out, I am a little puzzled to see in the PR search notes, http://kb.juniper.net/KB22825, that this is resolved in 11.4R2 and 12.1R1, but nothing is mentioned about a fix in any 10.4 release. I would have thought this would have been fixed in a 10.4 release by now, making the workaround unnecessary.

Does anyone know if you still have to implement the workaround in 10.4R10?

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
[j-nsp] Update on 10.4R9 stability for MX?
It has been a couple of months since the JTAC-recommended Junos software versions were last updated for the MX. As of February, the recommendation was to use 10.4R8.5 for the MX, except that there is an issue related to BFD configurations on the DPC line cards. Supposedly, the fix is in 10.4R9. In looking at the release notes, there are some issues that have been resolved in the 11.x series, but nothing noted yet for any future 10.4.x releases. Perhaps there are future 10.4.x versions planned to carry forward these fixes? I am curious to know about anyone's experience with 10.4R9 over the past few months. I have DPC only currently; i.e. no MPC hardware -- and no MultiServices. Thanks. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
[j-nsp] Interconnect two VRFs via L2 security box with redundant path
I have a design question to propose to the list. Suppose I have two VRFs in my MX routing core. Servers connect to one VRF (South) and the clients connect to the other VRF (North). I have a Layer2 security packet scrubbing box for inspecting traffic between my servers and clients. I have a sample network diagram: http://i.imgur.com/ZuOoC.png Here are my restrictions: a. I need to interconnect the North and South VRFs with the Layer2 security box physically at one of my two core routers (MX East). b. I also need to have a redundant path, preferably passing through the other core router (MX West). In the event that the Layer2 box dies, or if the MX East core router dies, unfortunately traffic will not get inspected but I will still have connectivity between the North and South VRFs via the MX West core router. c. Traffic is forced through the Layer2 box using dynamic routing protocols (I'd like to stay away from statics if I can). I would like to stick with IS-IS, but I could use BGP if needed for filtering purposes. I need to be careful not to introduce a routing loop between the two VRFs. The redundant link on MX West needs to be properly weighted such that it is completely passive except in the event that there is a failure at MX East and/or the Layer2 box. d. I have an MPLS infrastructure available in the core, so I could build a VPLS, L2 VPN, or L3 VPN if it would help. But I do want to keep things as simple as I can. How would you put together such a design? How would you implement the routing protocols between the VRFs? Would you use a logical tunnel at MX West to form the backup connection between the two VRFs? If you use vrf-import and vrf-export of routes (with auto-export) between the VRFs instead of a logical tunnel, how would you properly weight the routing information? Thanks. 
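For what it's worth, here is roughly how I have been picturing the backup leg at MX West: a logical tunnel pair stitching the North and South VRFs together, with the IGP metric on the lt- interfaces cranked up so the path only attracts traffic when the scrubber path at MX East fails. All interface names, units, and addresses below are made up for illustration:

```
[edit interfaces lt-0/0/10]
unit 0 {
    encapsulation ethernet;
    peer-unit 1;
    family inet {
        address 10.255.0.1/30;
    }
    family iso;                     /* needed if IS-IS runs over the lt- pair */
}
unit 1 {
    encapsulation ethernet;
    peer-unit 0;
    family inet {
        address 10.255.0.2/30;
    }
    family iso;
}

[edit routing-instances North]
interface lt-0/0/10.0;
protocols {
    isis {
        interface lt-0/0/10.0 {
            level 2 metric 63000;   /* deliberately near-worst metric */
        }
    }
}

[edit routing-instances South]
interface lt-0/0/10.1;
protocols {
    isis {
        interface lt-0/0/10.1 {
            level 2 metric 63000;
        }
    }
}
```

(On the MX you also need tunnel-services enabled on a PIC/PFE to get lt- interfaces at all.)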
Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
Re: [j-nsp] Interconnect two VRFs via L2 security box with redundant path
Stefan, I was just hunting through your blog for ideas when I saw your post :-) Thanks for jumping in. A few responses in-line below. On Tue, 24 Apr 2012, Stefan Fouant wrote:

> If that adjacency goes down, a simple floating static (a static route w/ a higher preference than the dynamic BGP/IS-IS route) pointing to next-table will do the trick. No need to use Logical-Tunnels or auto-export.

If my two routers were directly connected all of the time, this would be fine. But I'm also thinking of the case where there might be another L3 hop between the two routers. I guess I could insert another floating static on the third router, but that just seemed to add a little more complexity to me. I was hoping for a way to just let the dynamic routing protocols do the work for me, instead of fooling with a bunch of statics and filter-based forwarding. Don't get me wrong, I like FBF. I was just hoping to leverage dynamic routing more.

> Of course, in your case you've got not just two VRFs but also an East and West path which further complicates things - why not just connect the MX West device into your L2 Packet Scrubber as well and keep things the same on both the East and West device so that you can take full advantage of two planes. This will keep configurations uniform regardless of whether traffic comes in on the East or West devices.

I should have given the reason why I do not put the L2 scrubber between the two routers: conservation of fiber. I already have fiber connecting the routers in different wiring centers for traffic that does not need to be scrubbed. Chewing up another set of strands is much more expensive than simply connecting both sides of the L2 scrubber to just one router in the same rack. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
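For the archive, my understanding of Stefan's floating-static suggestion, sketched with made-up instance names and prefixes:

```
[edit routing-instances North routing-options static]
/* Normally the server routes are learned via IS-IS/BGP through the scrubber. */
/* If that path goes away, this static floats up and hands the lookup         */
/* directly to the other VRF's table.                                         */
route 10.20.0.0/16 {
    next-table South.inet.0;
    preference 250;    /* worse than any IGP/BGP route, so it only wins on failure */
}
```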
Re: [j-nsp] SNMP OID for sessions number
In response to: "what is the right SNMP OID/MIB variable for monitoring the number of sessions on a J/SRX box?" Try this: jnxJsSPUMonitoringCurrentFlowSession, which is available in the mib-jnx-js-spu-monitoring MIB. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
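If you want to sanity-check the object from the box itself before pointing your NMS at it, the CLI can walk the same table (on the bigger boxes expect one row per SPU):

```
user@srx> show snmp mib walk jnxJsSPUMonitoringCurrentFlowSession
user@srx> show snmp mib walk jnxJsSPUMonitoringMaxFlowSession
```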
Re: [j-nsp] Monitoring SRX redundancy groups via SNMP
Morgan, I had not noticed this behavior before, but I get the same results on a 3400 cluster. I think what is throwing you off is that the jnxRedundancyTable was designed to apply to Routing Engines, such as on the MX platform. Since you can have multiple Redundancy Groups, etc. on the SRX, the redundancy feature on the SRX doesn't line up well with what this MIB was designed to do. Let's compare your output from the SRX with an MX router with dual REs.

Your SRX cluster:

JUNIPER-MIB::jnxRedundancyContentsIndex.9.1.0.0 = INTEGER: 9
JUNIPER-MIB::jnxRedundancyContentsIndex.9.3.0.0 = INTEGER: 9
JUNIPER-MIB::jnxRedundancyL1Index.9.1.0.0 = INTEGER: 1
JUNIPER-MIB::jnxRedundancyL1Index.9.3.0.0 = INTEGER: 3
JUNIPER-MIB::jnxRedundancyL2Index.9.1.0.0 = INTEGER: 0
JUNIPER-MIB::jnxRedundancyL2Index.9.3.0.0 = INTEGER: 0
JUNIPER-MIB::jnxRedundancyL3Index.9.1.0.0 = INTEGER: 0
JUNIPER-MIB::jnxRedundancyL3Index.9.3.0.0 = INTEGER: 0
JUNIPER-MIB::jnxRedundancyDescr.9.1.0.0 = STRING: node0 Routing Engine 0
JUNIPER-MIB::jnxRedundancyDescr.9.3.0.0 = STRING: node1 Routing Engine 0
JUNIPER-MIB::jnxRedundancyConfig.9.1.0.0 = INTEGER: master(2)
JUNIPER-MIB::jnxRedundancyConfig.9.3.0.0 = INTEGER: master(2)
JUNIPER-MIB::jnxRedundancyState.9.1.0.0 = INTEGER: master(2)
JUNIPER-MIB::jnxRedundancyState.9.3.0.0 = INTEGER: master(2)
JUNIPER-MIB::jnxRedundancySwitchoverCount.9.1.0.0 = Counter32: 0
JUNIPER-MIB::jnxRedundancySwitchoverCount.9.3.0.0 = Counter32: 0
JUNIPER-MIB::jnxRedundancySwitchoverTime.9.1.0.0 = Timeticks: (0) 0:00:00.00
JUNIPER-MIB::jnxRedundancySwitchoverTime.9.3.0.0 = Timeticks: (739204) 2:03:12.04
JUNIPER-MIB::jnxRedundancySwitchoverReason.9.1.0.0 = INTEGER: neverSwitched(2)
JUNIPER-MIB::jnxRedundancySwitchoverReason.9.3.0.0 = INTEGER: neverSwitched(2)

An MX with 2 REs:

JUNIPER-MIB::jnxRedundancyContentsIndex.9.1.0.0 = INTEGER: 9
JUNIPER-MIB::jnxRedundancyContentsIndex.9.2.0.0 = INTEGER: 9
JUNIPER-MIB::jnxRedundancyL1Index.9.1.0.0 = INTEGER: 1
JUNIPER-MIB::jnxRedundancyL1Index.9.2.0.0 = INTEGER: 2
JUNIPER-MIB::jnxRedundancyL2Index.9.1.0.0 = INTEGER: 0
JUNIPER-MIB::jnxRedundancyL2Index.9.2.0.0 = INTEGER: 0
JUNIPER-MIB::jnxRedundancyL3Index.9.1.0.0 = INTEGER: 0
JUNIPER-MIB::jnxRedundancyL3Index.9.2.0.0 = INTEGER: 0
JUNIPER-MIB::jnxRedundancyDescr.9.1.0.0 = STRING: Routing Engine 0
JUNIPER-MIB::jnxRedundancyDescr.9.2.0.0 = STRING: Routing Engine 1
JUNIPER-MIB::jnxRedundancyConfig.9.1.0.0 = INTEGER: master(2)
JUNIPER-MIB::jnxRedundancyConfig.9.2.0.0 = INTEGER: backup(3)
JUNIPER-MIB::jnxRedundancyState.9.1.0.0 = INTEGER: master(2)
JUNIPER-MIB::jnxRedundancyState.9.2.0.0 = INTEGER: backup(3)
JUNIPER-MIB::jnxRedundancySwitchoverCount.9.1.0.0 = Counter32: 1
JUNIPER-MIB::jnxRedundancySwitchoverCount.9.2.0.0 = Counter32: 0
JUNIPER-MIB::jnxRedundancySwitchoverTime.9.1.0.0 = Timeticks: (655) 0:00:06.55
JUNIPER-MIB::jnxRedundancySwitchoverTime.9.2.0.0 = Timeticks: (0) 0:00:00.00
JUNIPER-MIB::jnxRedundancySwitchoverReason.9.1.0.0 = INTEGER: userSwitched(3)
JUNIPER-MIB::jnxRedundancySwitchoverReason.9.2.0.0 = INTEGER: other(1)

The differences are apparent here, as the SRX just shows a single Routing Engine 0 on each node. That being said, it might be really helpful for Juniper to expand/modify this MIB to support SRX clusters, or put in a new MIB. But I am not aware that they have done this --- at least on the flavor of 10.4 that I am running, or later. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
Re: [j-nsp] Filter-based forwarding outside of inet.0?
Thanks to Stacy and Hendri, I got this to work perfectly! This really helped. Since it does not hurt to have more examples (as they are non-existent in the Junos docs for this particular type of application - Boo Hoo!!!), I am including the recipe/configuration solution below. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187

             DefaultRoute via 192.168.0.1
                        ^
                        |
                        | xe-11/0/0.40
                        |
Downstream: 192.168.99.2 --- xe-9/0/0.40 [ VirtualRtr ] irb.42
                        |
                        |
                        v
             Hijack via 192.168.255.1

By default, I have a static route in a routing instance (VirtualRtr) sending the default route to 192.168.0.1. I want to hijack traffic matching a particular filter and send that traffic to a different next-hop, 192.168.255.1. For you Cisco types, this is basically equivalent to using a route-map to set the next hop:

route-map VirtualRtr-Redirect permit 100
 match ip address hijack-acl
 set ip vrf VirtualRtr next-hop 192.168.255.1

Whereas in the Cisco world you would create an ACL and apply it with the route-map to the incoming interface, in Junos you create a filter and apply the filter to the interface:

[edit firewall family inet filter fbf-redirect-filter]
term t1 {
    from {
        address {
            192.168.99.2/32;
        }
    }
    then {
        routing-instance fbf-test;
    }
}
term t2 {
    then accept;
}

[edit interfaces xe-9/0/0 unit 40]
vlan-id 40;
family inet {
    filter {
        input fbf-redirect-filter;
    }
    address 192.168.99.1/30;
}

At this point, Junos is more complex, as it adds a layer of abstraction with the concept of rib-groups. You create your rib group by importing FIRST the table belonging to your virtual router and SECOND the table for the forwarding instance that has the next-hop specified:

[edit routing-options]
rib-groups {
    fbf-rib-test {
        import-rib [ VirtualRtr.inet.0 fbf-test.inet.0 ];
    }
}

So here is the forwarding routing instance that defines the next-hop IP. But you'll need to make sure you can resolve the next-hop, so you associate the interface-routes with the rib-group you've created within the virtual routing instance:

[edit routing-instances fbf-test]
instance-type forwarding;
routing-options {
    static {
        route 0.0.0.0/0 next-hop 192.168.255.1;   ## PBR-like next-hop
    }
}

[edit routing-instances VirtualRtr]
instance-type virtual-router;
interface xe-9/0/0.40;
interface xe-11/0/0.40;
interface irb.42;
routing-options {
    interface-routes {
        rib-group inet fbf-rib-test;
    }
    static {
        route 0.0.0.0/0 next-hop 192.168.0.1;     ## Normal next-hop
    }
}

In my case above, 192.168.255.1 is hanging off of the irb.42 interface. Everything resolves in the routing tables:

show route table VirtualRtr
0.0.0.0/0    *[Static/5] 25w4d 07:20:38
             > to 192.168.0.1 via xe-11/0/0.40

show route table fbf-test
0.0.0.0/0    *[Static/5] 00:54:31
             > to 192.168.255.1 via irb.42

And you can also verify the forwarding entries (my IRB is part of a VPLS instance, hence the reference to the lsi interface):

show route forwarding-table table VirtualRtr
Routing table: VirtualRtr.inet
Internet:
Destination   Type  RtRef  Next hop           Type  Index   NhRef  Netif
default       user      0  0:23:9c:10:10:40   ucst  183639         xe-11/0/0.40
default       perm      0                     rjct     643      2
0.0.0.0/32    perm      0                     dscd     641      1

show route forwarding-table table fbf-test
Routing table: fbf-test.inet
Internet:
Destination   Type  RtRef  Next hop           Type  Index  NhRef  Netif
default       user      0  0:10:db:ee:10:0    ucst   4721      3  lsi.1048729
default       perm      0                     rjct   7005      2
0.0.0.0/32    perm      0                     dscd   6937      1
[j-nsp] Filter-based forwarding outside of inet.0?
I am still trying to wrap my head around FBF, and I am stuck on how to achieve Cisco-like PBR: forcing a packet that matches a set of conditions to go to a different next-hop inside a VRF. The problem I have is that the new next-hop can only be resolved within the VRF, NOT in the default routing instance (inet.0). Let's say I am trying to create this forwarding instance to change the default route:

[edit routing-instances fbf-test]
HonkinBigMx# show
instance-type forwarding;
routing-options {
    static {
        route 0.0.0.0/0 next-hop 192.168.255.1;
    }
}

I need to create a rib group where 192.168.255.1 can be resolved (correct?). It can be resolved in a virtual routing instance (a VRF) called test.inet.0, which is where I need to insert, via a filter, the changed default-route next-hop for PBR forwarding purposes. The 192.168.255.1 cannot resolve in inet.0 because it does not live there. If I try to create a rib group:

interface-routes {
    rib-group inet fbf-rib-test;
}
rib-groups {
    fbf-rib-test {
        import-rib [ fbf-test.inet.0 test.inet.0 ];
    }
}

the Junos compiler complains:

[edit routing-options interface-routes]
  'rib-group' fbf-rib-test: primary rib for instance master was not found in ribgroup configuration.
error: configuration check-out failed

If I try to define the interface-routes in the test.inet.0 routing-instance stanza (which is where I think it should be defined anyway), I get a similar complaint. In reading the docs, they insist that I must import inet.0 into the rib group, even though the next-hop cannot be resolved there. Furthermore, I can only define a rib-group in the default-routing-instance part of the config and not in the routing-instance part of the config. What am I missing here, and/or how can I work around this limitation?
Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
Re: [j-nsp] In Search of the Optimal RE Protect Filter - A Journey
Daniel, I would love to be proven wrong on this, but I do not think you can use family any filters on the lo0 interface. You can only use family inet filters, and presumably you could use family inet6 (I haven't tested that). Other filter families do not work, since the packet headers probably get stripped off before hitting the RE. In other words, you cannot look at ARPs, spanning tree, or any other non-IP stuff coming into the RE via the loopback interface. At least, I haven't figured out a way to do that on the MX platform. You would have to grab that using bridge-type filters on L2 interfaces on your platform. Pretty annoying if you ask me. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
Re: [j-nsp] snmp count for arp policer?
To bring some closure to this thread, it appears that the ARP policer counters for SNMP access have been fixed in Junos 10.4R6. However, this is still only helpful for tracking ARP events that exceed your policer threshold. As Stefan pointed out to me, if you have a family bridge interface on an MX, you can implement a family bridge filter to look for ether-type arp and count that way. Unfortunately, if you have VPLS running and the only interface you have in your VPLS instance is an IRB, this will not help you. I guess the only workaround is to put your family bridge filter with the counter on your remote PEs, to do your counting for you on your ingress/egress ports into the VPLS cloud. Not a very elegant solution, but better than nothing. Otherwise, configuring the appropriate threshold for your ARP policer is a lot of guesswork. Junos is a great solution, but visibility into what is going through the routing platform is lacking in some areas. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
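For anyone searching the archive later, this is the shape of the family bridge counting filter Stefan suggested (the interface and names are illustrative):

```
[edit firewall family bridge filter count-arp]
term arp {
    from {
        ether-type arp;
    }
    then {
        count arp-frames;
        accept;
    }
}
term everything-else {
    then accept;
}

[edit interfaces ge-1/0/0 unit 0 family bridge]
filter {
    input count-arp;
}
```

Then `show firewall filter count-arp` gives you the counter from the CLI, and filter counters should also be reachable via the Juniper firewall MIB for SNMP polling.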
[j-nsp] In Search of the Optimal RE Protect Filter - A Journey
directed broadcast packet; i.e. destined for the broadcast address of an IP subnet, it does NOT enter the RE via the loopback address for the routing instance where it was received. Instead, it enters the RE via lo0.0, assuming you have lo0.0 in the global routing instance. Tricky. Tricky. Well, I hope this all helps someone. If someone can clarify and/or improve on this, please let me know. I had to learn the hard way. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
Re: [j-nsp] RSVP to LDP migration
When we initially configured our RSVP infrastructure, we ran into a big problem: the LSPs/fast-reroute were not being handled properly over aggregated Ethernet links. So I had to fairly quickly convert everything over to LDP, so that I could work on the RSVP/link-aggregation issue later. In my experience, you can easily configure LDP to run alongside RSVP and not tunnel over RSVP. I simply followed Garrett's recipes in the _JUNOS Cookbook_, along with the Junos documentation, for basic LDP setup, and it was pretty straightforward -- no surprises. To make LDP preferred, I just changed the route preference on RSVP on all of my routers, without having to disable protocol rsvp. This was with Junos 9.6 on the MX platform. I do not have a good sense of outage time, however, simply because my RSVP stuff wasn't really working properly to begin with. The fact that LDP came up and traffic started flowing immediately was good enough for me at the time :-) Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) chm...@wm.edu
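Concretely, the preference change works because LDP routes land in inet.3 with preference 9 and RSVP routes with preference 7, so demoting the RSVP LSPs below LDP is enough. A sketch, with the LSP name and address invented for the example:

```
[edit protocols mpls]
label-switched-path to-PE2 {
    to 192.0.2.2;
    preference 12;   /* worse than LDP's default of 9, so LDP wins */
}
```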
[j-nsp] snmp count for arp policer?
On an IP interface (on a router like the MX), you can configure filters to count different types of IP packets. But there does not appear to be a way to count ARP packets, since they do not have an IP header. I would like to have some type of per-interface ARP packet counter that I can query with SNMP, just like the IP counters via filters that can be programmed into the router hardware. The closest thing I can find that might do it is an ARP policer. Unfortunately, this will only catch packets that hit some limit on your policer. That is better than nothing, but not great. From the CLI, you can look at the number of hits on the __default_arp_policer__, which I assume gets superseded by any interface-specific ARP policer. In this context, the show policer output via the CLI is helpful:

show policer
Policers:
Name                       Bytes          Packets
__default_arp_policer__    22143436345    330586727

But I do not know how to collect this information via SNMP. Does anyone have any clues on how to do this, aside from scripting it out via junoscript and the Utility MIB? Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
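For completeness, the interface-specific ARP policer I am referring to looks something like this (names and limits are placeholders); once attached, the interface policer takes over from __default_arp_policer__ for that unit:

```
[edit firewall]
policer arp-limit {
    if-exceeding {
        bandwidth-limit 100k;       /* illustrative limits only */
        burst-size-limit 15k;
    }
    then discard;
}

[edit interfaces ge-1/0/0 unit 0 family inet]
policer {
    arp arp-limit;
}
```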
[j-nsp] MX loopbacks, routing instances and broadcast/unicast to RE
This is a side issue related to my last MX loopback filter and monitor traffic thread: I am trying to understand how traffic on different routing instances (virtual routers, VRFs) gets picked up on different loopback interfaces on the MX. I am trying to design appropriate RE-protect filters, but it isn't intuitive to me how this works on this platform. For example, let's say I have the global routing instance, plus two VRFs (A and B, each one a separate routing instance):

[edit interfaces lo0]
root@MX-Rtr# show
unit 0 {
    description "This interface belongs to the Global routing instance";
    family inet {
        filter {
            input re-protect-global;
        }
        address 192.168.0.1/32;
    }
}
unit 1 {
    description "This interface belongs to VRF A routing instance";
    family inet {
        filter {
            input re-protect-vrfa;
        }
        address 192.168.100.1/32;
    }
}
unit 2 {
    description "This interface belongs to VRF B routing instance";
    family inet {
        filter {
            input re-protect-vrfb;
        }
        address 192.168.200.1/32;
    }
}

Here is what I am seeing: for unicast traffic destined to the RE, traffic hits the loopback interface according to the appropriate routing instance; e.g. traffic coming into the MX on the global routing instance destined to the RE is seen by the re-protect-global filter, traffic coming in on VRF A is seen by the re-protect-vrfa filter, and traffic coming in on VRF B is seen by the re-protect-vrfb filter. The same logic applies to multicast traffic. For example, if you run OSPF in different routing instances, the appropriate filter per routing instance will see the appropriate OSPF multicast traffic. Makes sense. However, broadcast traffic is handled differently. First, I have recently learned that Junos takes the OPPOSITE default position from Cisco IOS on the 6500/7600 platforms. By default, Cisco does not pass directed broadcast on to the Supervisor. Junos, on the other hand, sends all directed broadcast to the RE by default.
Secondly, this broadcast traffic is ALWAYS seen by the re-protect-global filter -- no matter what routing instance the traffic entered the router on. So, directed broadcast on VRF A does NOT get seen by re-protect-vrfa, and directed broadcast on VRF B does NOT get seen by re-protect-vrfb. Instead, you will always see that traffic on the re-protect-global filter. This appears to be true whether you are looking at directed broadcast on a subinterface or on IRBs. So, I have two questions: (1) why does Junos send directed broadcast to the RE by default, and (2) why does directed broadcast traffic show up on lo0.0 irrespective of the arriving routing instance? Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
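A practical consequence of the behavior described above: if you want to count or drop that per-VRF directed broadcast, the term has to live in the global (lo0.0) filter. A sketch, with an invented subnet-broadcast address standing in for a VRF A prefix:

```
[edit firewall family inet filter re-protect-global]
term vrf-directed-broadcast {
    from {
        destination-address {
            192.168.100.255/32;    /* subnet broadcast of a VRF A prefix (example) */
        }
    }
    then {
        count vrf-directed-bcast;
        discard;
    }
}
```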
[j-nsp] MX loopback filter and monitor traffic
I have a question about how the monitor traffic capability works on the loopback interface, particularly with respect to a filter. If I write a filter, such as under a firewall family inet filter re-protect stanza, and apply it to the loopback interface, unit 0:

set interfaces lo0 unit 0 family inet filter input re-protect

I can see traffic hitting the filter, if I have any counters configured in the filter. I can see that the traffic coming into the filter is getting to the RE via any IRBs or other layer 3 interfaces that are terminated on the MX. I can do a monitor traffic on any of these layer 3 interfaces on the input side and see the relevant traffic (to and/or from the RE). However, if I do a monitor traffic on the loopback interface itself, I see nothing:

MX> monitor traffic interface lo0.0 no-resolve no-domain-names
verbose output suppressed, use <detail> or <extensive> for full protocol decode
Address resolution is OFF.
Listening on lo0.0, capture size 96 bytes
^C
0 packets received by filter
0 packets dropped by kernel

If all of the traffic that comes into the router destined to the RE via these exposed layer 3 interfaces eventually makes its way to the RE via the loopback address at unit 0, why is it that the monitor traffic command does not show me anything? Why is the loopback interface so special? Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
[j-nsp] Avoid route loop for joining IS-IS/OSPF areas with redundancy?
I am trying to solve some problems and I need a little sanity check. I have some routers that only support OSPF, and I am trying to integrate this OSPF area/autonomous-system into a Level 2 IS-IS area/autonomous-system. Simplified (my best rendering of the topology), it looks like this:

        IS-IS             OSPF
        RTR B ----------- RTR D
       /  |                 |  \
      /   | IS-IS      OSPF |   \
RTR A     |                 |    RTR F --static-route 0.0.0.0/0--> my ISP
      \   |                 |   /
       \  |                 |  /
        RTR C ----------- RTR E
        IS-IS             OSPF

A couple of other things to note:
a. there is no Level 1 IS-IS here, just Level 2.
b. OSPF is area 0.0.0.0.
c. IS-IS is wide-metrics-only, so there is no real distinction between internal and external routes in IS-IS.
d. the static route 0.0.0.0/0 pointing to my ISP needs to be propagated into the IS-IS autonomous system, and I want to rely on OSPF costing on the interfaces to traffic-engineer the path for the default route, while still allowing for redundancy.
e. to avoid looping in general, I tag routes redistributed from IS-IS into OSPF, and then reject routes matching that tag from being redistributed from OSPF back into IS-IS. The same logic applies in the other direction, for routes going from OSPF to IS-IS.

Here is the first problem: if I stay with the default Junos route preferences, and, let's say, one of the ASBRs (Router B or C) goes down and comes back up, I'll get a routing loop for the default route. Since wide-metrics-only forces IS-IS to forget about the internal/external distinction, the default route gets lost between Routers B and C. Following the advice of Herrero and Van der Ven in _Network Mergers and Migrations_, it should be sufficient to prefer OSPF internal AND external routes over IS-IS. For example, I could demote IS-IS Level 2 internal routes by raising their preference value from the default of 18 to something above the default OSPF external route preference of 150, such as 155. I would need to do this on both routers B and C. This would force the OSPF external route for 0.0.0.0/0 to win over IS-IS at the border routers, B and C.
But here is a second problem: if I do not have OSPF configured directly between Routers B and C, I could get a suboptimal routing situation. For example, let's say I have a loopback address on router A and the path from A to C is shorter than the path from A to B. Router B would then see router A's loopback as best advertised through OSPF via Router D. Ugh. So perhaps I just need to configure OSPF between B and C. This still isn't the most optimal method, because now that loopback address for A is best reachable from router B through router C, even though A is right next door. I could just extend OSPF all the way over to router A and clean that up, even though I really am trying to move off OSPF as soon as possible to simplify life, but I guess I can live with it temporarily. I just need to remember to apply the same route preference settings and redistribution routing logic with tags on router A as I have done on routers B and C. In the future, when I want to replace the OSPF-only routers with routers that support both OSPF and IS-IS, I can simply go from router to router and reverse the route preferences for IS-IS and OSPF, making IS-IS better, and then remove OSPF altogether. Anyway, am I on the right track here, or am I forgetting something really important? Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
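For concreteness, the sort of configuration I have in mind on routers B and C (the tag value and policy names are just examples):

```
[edit protocols isis]
preference 155;     /* raise IS-IS internal preference above OSPF external (150) */

[edit policy-options]
policy-statement isis-to-ospf {
    term mark-and-export {
        from protocol isis;
        then {
            tag 300;        /* mark routes that originated in IS-IS */
            accept;
        }
    }
}
policy-statement ospf-to-isis {
    term reject-looped {
        from {
            protocol ospf;
            tag 300;        /* came from IS-IS originally -- don't send it back */
        }
        then reject;
    }
    term export-the-rest {
        from protocol ospf;
        then accept;
    }
}
```

The isis-to-ospf policy would be applied as an export policy under OSPF, and ospf-to-isis as an export policy under IS-IS, with a mirror-image pair of tags for the other direction.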
Re: [j-nsp] JUNOS and MS RPC
Glenn said: Is anyone running MS products through SRX firewalls? How are you getting RPC to work? According to engineering, the ScreenOS ms-rpc-any isn't included in JUNOS, although I do see the ALG catching the info based off of endpoint mapper sessions. --- Glenn, I have been struggling with the MS-RPC ALG for weeks now in version 10.1R4 without any success. My workaround has been to leave the entire range of ephemeral ports above 1024/tcp open, which isn't ideal. What I have been able to learn is that in addition to allowing the control session for RPC to go through via the junos-ms-rpc default application, you also have to specify the application for the dynamic port. In my case, the UUID for my MS RPC application does not have a corresponding default defined in the hidden junos-defaults config group, so I have to define my own, ms-rpc-epm-dynamic, as in my example below. Here is how I found out what my version of Junos has defined for the defaults:

show configuration groups junos-defaults | find junos-ms-rpc

application junos-ms-rpc-tcp {
    term t1 alg ms-rpc protocol tcp destination-port 135;
}
application junos-ms-rpc-udp {
    term t1 alg ms-rpc protocol udp destination-port 135;
}
#
# Microsoft RPC EPM (End Point Mapper)
#
application junos-ms-rpc-epm {
    term t1 protocol tcp uuid e1af8308-5d1f-11c9-91a4-08002b14a0fa;
}
etc.

Here is a snippet of the type of config I have been using (I am assuming this is all TCP, not UDP):

policy Test-Inbound {
    match {
        source-address Campus;
        destination-address MS-RPC-Servers;
        application [ ms-rpc-epm-dynamic junos-ms-rpc-tcp ];
    }
    then {
        permit;
        log {
            session-init;
            session-close;
        }
    }
}

application ms-rpc-epm-dynamic {
    term t1 protocol tcp uuid ----;
}

Unfortunately, the SRX is dropping the dynamic session (via a subsequent deny policy, or the default deny policy) about a half a dozen or a dozen packets into the session.
And like you, I see that the SRX is catching the endpoint mapper sessions correctly, but it just isn't maintaining the context correctly throughout the life of the dynamic connection. Supposedly, according to JTAC, there are MS RPC ALG fixes in 10.4R3, but I have not tested that far yet. I'd be curious to know if you have found any success. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
[j-nsp] SRX policy action to inject a route in a table??
The SRX policy actions (count, deny, log, permit, reject) are helpful, but a little limited. I am wondering if there might be a way to enforce a special action, such as taking the source IP address of the packet and injecting it into a routing table of some sort. What I have in mind is some way to use the SRX to grab the IPs of misbehaving hosts and put the addresses in a RIB. Then I could use routing policy to put the routes into a BGP feed to a border router that would null-route traffic to and from those IP addresses, using tricks with Unicast Reverse Path Forwarding. This would be like using the SRX as a simple honeypot to enforce a host address block at the network perimeter. Of course, there are all sorts of dangers and challenges involved, such as making sure you don't end up DOS'ing the SRX yourself, etc. But I still wish there was a clean way to proactively do this. My other option is to just log the packet somewhere else, parse the log, grab the IP of the offender, and populate my BGP feed that way. But this could get complicated, too. It would be a handy feature to do this entire task on the SRX. Anybody have any ideas? Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
[j-nsp] Third Edition of Minei Lucek MPLS-Enabled Applications
I see that there is now a new edition of Ina Minei's and Julian Lucek's _MPLS-Enabled Applications: Emerging Developments and New Technologies_ out now. http://www.amazon.com/MPLS-Enabled-Applications-Developments-Technologies-Communications/dp/0470665459 I have read much of the second edition and it is probably the best one-stop text on MPLS protocols and theory that I have come across. I only wish there were JUNOS configuration and debugging cross-references to go along with it to make it more practical. Anyway, I was wondering if anyone on the list has read the new third edition yet. I'd be curious to know if it would be worth getting over and above the second edition. Thanks. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
Re: [j-nsp] Monitoring interface counters on J/SRX
Dale, I have run into the same issues you have with the interface counters. The physical interface counters give you the most detail, not only with respect to errors, discards, etc., but with multicast/broadcast vs. unicast, too. The logical interface counters only help with basic packet and octet counters. And by logical interface, this also applies to things like Integrated Routing and Bridging (IRB) interfaces. As you pointed out, you can measure bandwidth just fine for logical interfaces, but pretty much everything else just reads as zero in the MIBs. This applies to the standard IF-MIB as well as the proprietary jnxMib. You can use filters for counting purposes on logical interfaces, and that helps make up for what isn't available by default, but from what I have seen, filters do not dig down into the physical error realm. Hope that helps somewhat. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187 On Sat, 12 Feb 2011, Dale said: Message: 3 Date: Sat, 12 Feb 2011 20:10:02 +1100 From: Dale Shaw dale.shaw+j-...@gmail.com To: juniper-nsp juniper-nsp@puck.nether.net Subject: [j-nsp] Monitoring interface counters on J/SRX Message-ID: aanlktinqs23_oq58gnpcuvrs+y2kppjzyoen-hdcp...@mail.gmail.com Content-Type: text/plain; charset=ISO-8859-1 Hi, [ disclaimer: I don't claim to be a network management expert but I can spell SMTP* ] In our network, we capture, graph and report on the usual set of interface counters - errors and discards in/out, octets in/out, packets in/out and so on. Most of the in-house skills are with Cisco products but we're getting a lot better with J. I'm trying to come to terms with the difference between the physical (ifd) counters and the logical (ifl) counters for both 'family inet' and 'family ethernet-switching' interfaces.
I'm facing a few dilemmas: 1) on Ethernet interfaces shaped to sub-line-rate, only logical units report the 'correct' bandwidth, so in order for our graphs to scale correctly we either have to perform interface utilisation reporting against the logical unit(s), or manually configure our NMS with the shaped/provisioned bandwidth value. In most cases we're only using one logical unit (unit 0). 2) I noticed recently that our NMS is collecting error/discard counters from logical interfaces, and these appear to be always zero. A few CLI checks around the network seem to prove the theory - error/discard counters must be collected from the physical interface. What's the right thing to do here? If there is no right/wrong, what works for you? Anyway, I suppose what I'm really looking for is some generic advice on how to monitor interfaces in Juniper routers and switches -- are the standard MIBs OK? I just noticed there appear to be interface counters in jnxMibs. Do you grab interface error/discard counters from the physical only? Is there a way to populate logical interface error/discard counters with the underlying physical interface's counters? (it seems the logical interface does not track errors/discards, which I guess makes sense). More than happy to be pointed at good documentation. Cheers, Dale * That was a hilarious joke.
[j-nsp] Power budget detail command for data center SRX?
I have a pair of SRX 3400s that are experiencing some odd hardware problems. I suspect that there could be a power issue, in that an individual system is unable to power up a line card due to inadequate power. Unfortunately, I asked JTAC and they could not tell me of any JUNOS command for describing how much power is being drawn by or dedicated to particular line card components; i.e. an equivalent to 'show power' on a Cisco 65xx/76xx chassis. Does anyone know of any JUNOS command to obtain power draw details on a data center SRX? Thanks. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
Re: [j-nsp] Interface Counters | RVI vs. IRB
Bill Blackford said: I have some customers I bring in on tagged vlans through a single physical interface and terminate on a router at layer 3 in RVIs. I can't poll interface stats (bandwidth) in RVIs, SVIs, etc. Since I will be migrating these from an EX to an MX and thus terminating at layer 3 in IRBs, can I poll bandwidth stats on an IRB? Yes, you can get some interface stats, but if you need to get non-unicast counters working, the standard IF-MIB counters will not work for you. However, as a workaround, I understand that you can create a filter with a counter for multicast and/or broadcast, and you should be able to query it via a Juniper proprietary MIB. I haven't tested the workaround, but I was moving to IRBs on the MX and ran into this issue. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
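The workaround I mentioned would look something like the sketch below (untested on my part; the filter, counter, and unit names are made up). The counters should then be readable from the Juniper firewall filter MIB rather than IF-MIB:

firewall {
    family inet {
        filter irb100-in-counters {
            term mcast {
                from {
                    destination-address {
                        224.0.0.0/4;
                    }
                }
                then {
                    count irb100-in-mcast;
                    accept;
                }
            }
            term rest {
                then accept;
            }
        }
    }
}
interfaces {
    irb {
        unit 100 {
            family inet {
                filter {
                    input irb100-in-counters;
                }
            }
        }
    }
}

A similar term matching the subnet's broadcast address would cover broadcast counting.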
Re: [j-nsp] Optimal BFD settings for BGP-signaled VPLS?
Thanks for the responses thus far concerning BFD for BGP-signaled VPLS. Some have asked about using RSVP: we originally set out to use RSVP, but we ran into a number of bugs/issues in our environment. We aren't able to take advantage of all of the bells and whistles of RSVP anyway, and LDP was A LOT simpler to deploy. Perhaps as some of these JUNOS PRs get worked out we will revisit RSVP. In our environment, my only major qualm with LDP is the multicast replication issue at the ingress PE. But hopefully when/if Juniper supports mLDP, this should get resolved. Regarding BFD echo mode, I am curious to know if others have operational experience with it (on non-Junos gear, since Juniper does not yet support it). Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
[j-nsp] Optimal BFD settings for BGP-signaled VPLS?
I am trying to determine the optimal Bidirectional Forwarding Detection (BFD) settings for BGP auto-discovery and layer-2 signaling in a VPLS application. To simplify things, assume I am running LDP for building dynamic-only LSPs, as opposed to RSVP. Assume I am running IS-IS as the IGP with BFD enabled on that, too, interconnecting all of the P and PE routers in the MPLS cloud. I am following the Juniper recommendation of a 300 ms minimum interval with 3 misses before calling a BFD down event. The network design has a small set of core routers; each of these routers serves as a BGP route reflector. All of the core routers are fully meshed, so each core router is only one hop away from the others. On the periphery, I have perhaps dozens of distribution routers. Each distribution router is directly connected to two or more core routers. Each distribution router has a BGP session to these core routers; therefore, each distribution router is connected to two route reflectors for redundancy. Given the above, what type of minimum-interval BFD setting and miss count would you configure? I want to try to get to sub-second convergence during node/link failure, but I do not want to tune BFD too tight and potentially introduce unnecessary flapping. I am willing to suffer some sporadic loss of layer-2 connectivity within the VPLS cloud in the event of a catastrophe, etc., for a few seconds, but I don't want to unnecessarily tear down BGP sessions and wait some 20 to 60 seconds or so until BGP rebuilds and redistributes L2 information. For some time now, I have been playing with a 3000 ms interval with 3 misses (that's 9 seconds) as what I thought was a conservative estimate. However, I have run into cases where there has been enough router churn for various reasons to unnecessarily trip a BFD down event. My hunch is that this router churn is due to buggy JUNOS code, but I don't have proof of that yet.
Nevertheless, I want the BGP infrastructure to stay solid and ride through transient events in a redundant network. Are there any factors that I am missing or not thinking thoroughly enough about when considering optimal BFD settings? Thanks. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
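For reference, the knobs in question, as I currently have them on the distribution-router BGP sessions (the group name and neighbor address are examples, not my real config):

protocols {
    bgp {
        group to-core-rr {
            neighbor 192.0.2.1 {
                bfd-liveness-detection {
                    /* in ms; 3000 x 3 = 9 s detection time */
                    minimum-interval 3000;
                    multiplier 3;
                }
            }
        }
    }
}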
Re: [j-nsp] SNMP polling issue MX
It looks like there were possibly multiple mib2d process bugs. In our case, restarting mib2d did not always resolve the issue. The good news is that it does look like 10.2R3 fixes the mib2d issues we were experiencing. We've been running 10.2R3 (or a derivative) for several weeks and SNMP appears to be stable. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187 On Sunday, Richard said: On Sun, Jan 09, 2011 at 04:14:04PM +0300, Tarique A. Nalkhande - BMC wrote: All, Even we faced a similar problem on our MX's running 10.2R3. Further findings revealed a memory leak bug in the mib2d process; restarting mib2d fixed it. Juniper is probably tracking it through some internal PR; the committed release is 10.2R3, which doesn't look likely. Hrmm, supposedly the mib2d memory leak fix was in 10.2R3, but we never actually tested it; we just skipped straight ahead to 10.3R2 on new deployments (as there were many other SNMP bugs still not fixed in 10.2 at the time).
A quick and dirty workaround for the memory leak issue is to periodically restart the mib2d process, which you can do with an event script like so:

event-options {
    generate-event {
        /* Adjust timer as necessary based on memory consumption */
        restart-mib2d time-interval 604800;
    }
    policy restart-mib2d {
        events restart-mib2d;
        /* Adjust this too, to something slightly less than above */
        within 60 {
            not events restart-mib2d;
        }
        then {
            event-script restart-mib2d;
        }
    }
}

/var/db/scripts/op/restart-mib2d.slax:

version 1.0;
ns junos = "http://xml.juniper.net/junos/*/junos";
ns xnm = "http://xml.juniper.net/xnm/1.1/xnm";
ns jcs = "http://xml.juniper.net/junos/commit-scripts/1.0";
import "../import/junos.xsl";

match / {
    <op-script-results> {
        var $restart-mib2d = {
            <command> "restart mib-process gracefully";
        }
        var $result = jcs:invoke($restart-mib2d);
    }
}

-- Richard A Steenbergen r...@e-gerbil.net http://www.e-gerbil.net/ras GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
Re: [j-nsp] Joining OSPF IS-IS areas via 2 ABRs
On Sat, 9 Oct 2010, Smith W. Stacy wrote: I think you misunderstood. My example in no way requires virtual/logical routers. I simply used them because I only had access to a single physical router and wanted to create a more complex topology to verify the solution to your redistribution question. The only configuration necessary is on abr1 and abr2. The other routers in my topology (ospf1, ospf2, isis1, and isis2) were simply to verify that the redistribution on abr1 and abr2 was working correctly. ospf1 and ospf2 are OSPF-only speaking routers. isis1 and isis2 are ISIS-only speaking routers. --Stacy Stacy, Oh, I understood you completely. In fact, you didn't know it, but the idea of virtual/logical routers is a more elegant solution in my case. In my case, abr1 and isis1 are but one physical router -- same for abr2 and isis2. My only problem is the issue of stealing bandwidth from the MX PFEs to support 10Gig logical tunnels. On the isis-only speaking routers (isis1 and isis2), getting transit traffic to go where I want it to go via abr1 or abr2 is easy. The problem, for example, is with traffic coming in from some *other* isis-only router, isis3, via abr2. In this case, isis3 must physically pass traffic through abr2 to get to abr1. I want that traffic to transit through abr1-ospf1 to the OSPF world, but since OSPF internal routes have a better preference than IS-IS routes, it will take the abr2-ospf2 path when it hits abr2, even though the metric from IS-IS is engineered to be better going through the IS-IS domain via abr1. In other words, dropping in a separate abr1 and abr2 router between the OSPF and IS-IS worlds is a lot cleaner -- but it costs me something in terms of bandwidth resources to support the logical tunnels. Does that clarify my challenge a bit? P.S. I just wish the vendor of my OSPF-only routers would support ISIS in a virtual routing environment.
Life would be simpler :-) Clarke
Re: [j-nsp] Joining OSPF IS-IS areas via 2 ABRs
Smith W. Stacy says: Hi Clarke, I believe I have an answer for you... Smith: I really want to thank you for the full diagram and thoughtful config. That's a great solution. Your use of logical tunnel interfaces to bring the different logical routers together is a clean way to do it, and it stands by itself. My only caveat is that I was hoping to find a way to solve this by combining abr1 and isis1 into one physical router and abr2 and isis2 into a separate physical router. True, I could follow your config and have two separate routing instances and/or true-blue logical routers on these physical routers and accomplish what I need to do. However, please correct me if I am wrong, but the main drawback I see with the logical tunnel approach to this problem -- at least for the MX platform that I am using -- is that logical tunnels consume PFE resources; i.e. the need to include a tunnel-services statement under the [edit chassis fpc <num> pic <num>] stanza. Amending your diagram, I was looking at something like this:

   +--------+      10.0.1.0/30      +--------+
   | ospf1  | .1-----------------.2 | ospf2  |
   |10.0.0.1|                       |10.0.0.2|
   +---+----+                       +---+----+
    .1 |                             .1 |
       | 10.0.2.0/30                    | 10.0.3.0/30
    .2 |                             .2 |
   +---+--------+   192.168.3.0/30  +---+--------+
   |   abr1     | .1--------------.2|   abr2     |
   |192.168.0.1 |                   |192.168.0.2 |
   +---+--------+                   +---+--------+
       | IS-IS                  IS-IS   |
       v                                v
  other isis routers           other isis routers

Unfortunately, this makes the configuration of abr1 and abr2 very complex and problematic. For example, even though I could play tricks with metrics, abr1 and abr2 would both have OSPF routes to 10.0.1.0/30. So, if I gave the path through abr1-ospf1 the better metric, but then had a packet transiting abr2 towards 10.0.1.0/30, it would most likely take the path from abr2 via ospf2 -- despite the better metric via abr1-ospf1.
For example: On abr1:

10.0.1.0/30   *[OSPF/10] 1w5d 07:24:08, metric 20
                 to 10.0.2.1 via xe-6/2/0.130

On abr2:

10.0.1.0/30   *[OSPF/10] 1w5d 07:24:18, metric 50
                 to 10.0.3.1 via xe-6/2/0.130
               [IS-IS/18] 7w5d 07:24:18, metric 30, tag 80
                 to 192.168.3.1 via xe-11/0/0.130

So I was wondering about using route preference to solve this on abr2; i.e. increasing the OSPF internal route preference value to make the IS-IS learned route via abr1 preferred for 10.0.1.0/30. I was looking at this until I realized that forcing a change in route preference applies to all routes from that protocol. There did not seem to be a way to use policy to force route preference for a prefix-list within this context (though I could be missing something). Also, there could be other scary things here I am not considering when playing with route preference. Any other thoughts on this problem? Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
[j-nsp] Joining OSPF IS-IS areas via 2 ABRs
I have an ISIS/OSPF area-joining problem that I am trying to solve, and I am wondering if JUNOS policy can help. Let's say that I have a bunch of routers in an IS-IS Level 2 domain. I also have a few routers in an OSPF area (area 0.0.0.0). The two areas are joined via two Area Border Routers (ABR1 and ABR2). Eventually, I need to bring the routers in the OSPF area into the IS-IS L2 domain. Unfortunately, the current hardware does not support IS-IS on those routers. So I have to live with these OSPF-only routers temporarily.

        OSPF backbone area 0
     X --------- X        (these routers only support OSPF)
     |           |
 X  ABR1       ABR2  X    (these routers support OSPF and IS-IS)
  \  |  \     /  |  /
   \ |   \   /   | /
     V    \ /    V
       IS-IS L2 domain

I also have a requirement to pass traffic for *some* routes between these areas in an active-passive type of scenario; i.e. if ABR1 is available, force all of the traffic for these routes through ABR1, but do not allow traffic between areas for these routes via ABR2. If something bad happens to ABR1, then and only then will ABR2 act as the transit router for this inter-area traffic. Any transitions between ABR1 and ABR2 need to happen automagically, i.e. without manual intervention, and preferably without any static routes. Currently, I can configure ABR1 and ABR2 to both act as active transit routers between the two areas. However, I haven't been able to figure out how to make this an active/passive arrangement for particular routes. I have explored various tweaks involving IGP metrics and route preferences (administrative distance in Cisco-speak), but I do not have a satisfactory solution yet. Does the very nature of these IGP protocols make it impossible to solve this problem, or can JUNOS policy on the ABRs help here? Any thoughts on how to arrive at a solution? Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
Re: [j-nsp] Joining OSPF IS-IS areas via 2 ABRs
Offlist it was suggested to me that my topology is not very clear. Perhaps a config snippet will help. Essentially, my two ABRs have one foot in the OSPF world and the other foot in the IS-IS world. If needed, I could run OSPF directly between the two ABRs. Both ABRs have direct connections to each other. Here is a sample config snippet from ABR1. The long-term goal is to convert these OSPF-only routers to IS-IS routers and bring them into the IS-IS domain:

[edit routing-instances Test]
ABR1# show
instance-type vrf;
interface xe-6/2/0.130;
interface xe-11/0/0.130;
interface xe-10/0/0.130;
interface lo0.130;
route-distinguisher 192.168.0.7:130;
vrf-import [ testin ];
vrf-export [ testout ];
vrf-table-label;
routing-options {
    router-id 192.168.130.7;
    auto-export;
}
protocols {
    ospf {
        reference-bandwidth 100g;
        area 0.0.0.0 {
            interface lo0.130 {
                passive;
            }
            interface xe-6/2/0.130 {
                description "Facing OSPF area - no IS-IS";
            }
        }
    }
    isis {
        level 2 wide-metrics-only;
        level 1 {
            disable;
            wide-metrics-only;
        }
        interface xe-11/0/0.130 {
            description "Facing ABR2";
        }
        interface xe-10/0/0.130 {
            description "Facing another IS-IS router";
        }
        interface lo0.130 {
            passive;
        }
    }
}

Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
[j-nsp] GRE on MX platform without Multiservices DPC?
Does anyone know how to configure a GRE tunnel on the MX platform withOUT a Multiservices DPC? I *thought* that you could somehow steal resources from a line card to create a GRE tunnel on the MX, but I have not found an example of how to do that. I do not need much bandwidth, so the Multiservices DPC is just overkill for this application. I can create a gr interface in the config, but after committing the config the gr interface does not show up:

[edit interfaces gr-1/0/1]
unit 0 {
    tunnel {
        source 192.168.0.1;
        destination 192.168.1.1;
    }
    family inet {
        address 10.0.0.1/30;
    }
}

show interfaces gr-1/0/1
error: device gr-1/0/1 not found

I know that you can build a logical tunnel to create connections within a router, but I need a GRE tunnel to connect to an external router. Are the methods similar? Any ideas? Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
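The "steal resources from a line card" approach referenced above is, as I understand it, done by dedicating a slice of a line card's PFE bandwidth to tunnel services. A sketch, with example fpc/pic numbers and bandwidth (untested here):

chassis {
    fpc 1 {
        pic 0 {
            tunnel-services {
                bandwidth 1g;
            }
        }
    }
}

After a commit, tunnel interfaces (something like gr-1/0/0 and lt-1/0/0) should appear at that fpc/pic position, and a gr unit like the one configured above can then be committed against the interface that actually exists.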
Re: [j-nsp] IS-IS database leaking across virtual routers?
Just to put a little closure on this topic from two months ago for the archives: The issue I was having was NOT due to the IS-IS database leaking across VRFs. I had some routing policy problems due to configuration errors. However, the fact that Juniper will only assign one IS-IS hostname per router, regardless of the number of VRFs, is a convenient red herring. Juniper is essentially overwriting the TLV 137 information within the router database every time a TLV 137 LSP is received from a neighbor in a different VRF. This is very annoying. I did compare Juniper's IS-IS VRF implementation with Cisco's, and Cisco does not have this problem. Cisco will assign the same IS-IS hostname across multiple VRFs without causing any confusion. Perhaps Juniper can learn something from their primary competitor :-) Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187 On Thu, 17 Jun 2010, Stefan Fouant wrote: -Original Message- From: juniper-nsp-boun...@puck.nether.net [mailto:juniper-nsp- boun...@puck.nether.net] On Behalf Of Clarke Morledge Sent: Tuesday, June 15, 2010 5:31 PM To: Alan Gravett Cc: juniper-nsp@puck.nether.net Subject: Re: [j-nsp] IS-IS database leaking across virtual routers? Alan, Actually, I did implement your workaround before with the static host mapping. But that is rather cosmetic when compared to something like the overload bit. In theory (or at least, in *my* theory), setting the IS-IS overload bit in one virtual routing instance should not interfere with IS-IS in another virtual routing instance. Unfortunately, the observed behavior on the MX platform suggests some form of leaking. I'm just not entirely convinced now that a virtual router really means a separate link-state database per virtual router. Within this context, a virtual router should behave just like a physical router --- or like a logical router, for that matter. Am I mistaken here?
Hey Clarke, Sorry, I'm just getting around to reading this now. I would say you are correct in your understanding of the way that VRs are supposed to work - routes/TLVs/etc. in one VR should not be leaking into the other. I'm curious, how are you mapping the traffic into their respective VRs? Are these separate and distinct physical interfaces which are bound to their respective VRs, or are you using some form of VLAN tagging and mapping unique VLANs into a given VR? Is there any chance you have any type of rib-groups or some other type of vrf-import/export policy in place that might be causing some unintended behavior? Care to share some of your configuration? All the best, Stefan Fouant, CISSP, JNCIEx2 www.shortestpathfirst.net GPG Key ID: 0xB5E3803D
[j-nsp] My strained affection for fxp0
I know we had a thread on this a month ago: http://www.mail-archive.com/juniper-nsp@puck.nether.net/msg09804.html but I wanted to explore an idea on how to handle the troubles behind managing fxp0. I was able to determine that even though fxp0 is supposed to only handle out-of-band traffic to/from the RE, it will in fact forward transit traffic through fxp0 if a particular route exists both on the fxp0 side of the world and everywhere else, as in the following example:

192.168.1.0/24 *[Static/5] 3d 03:27:05
                  to 192.168.0.1 via fxp0.0
                [Static/6] 3d 03:55:58
                  to 192.168.2.5 via xe-10/0/0.0

Add my name to the list of those wanting the ability to put fxp0 in a separate VR!! In the meantime, my solution to the problem was just not to use the same route on both the in-band and out-of-band sides, and simply do a NAT trick on a different router on the out-of-band side of the network. It works, but it just seems unnecessarily complex and ugly to me. I was wondering if there was any way to do some sort of policy-based routing such that any packet generated from the RE towards a particular route could get forwarded out a different interface than what is in the routing table. For example, your route normally lives in the in-band world, but a packet to that route from the RE would go out fxp0. Unfortunately, I haven't figured out a way that this can be done within the Junos architecture (at least on the MX platform). Has anyone been able to come up with such a PBR-type solution? Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
[j-nsp] SRX3400/3600 Stable Code Recommendations?
There have been a number of views expressed recently on the list regarding SRX and Junos code bugginess and instability. I am wondering if a lot of the issues are related to specific platforms. Specifically, I am curious to know about the reliability of some of the smaller data center models, namely the 3400 and 3600. Are there any stable code recommendations regarding these data center platforms? Are there any particularly noteworthy code revs that one should stay away from? I tested the 3400 with flavors of 10.0 for an evaluation recently and it performed pretty well, but I did not bang on it as much as I wanted. I find it interesting, to say the least, that Juniper officially does not recommend (or even make available for the lower end) the latest 10.2R1 for any of their SRX products, including the higher end models. And it has been two months now with no maintenance release. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
Re: [j-nsp] IS-IS database leaking across virtual routers?
Alan, Actually, I did implement your workaround before with the static host mapping. But that is rather cosmetic when compared to something like the overload bit. In theory (or at least, in *my* theory), setting the IS-IS overload bit in one virtual routing instance should not interfere with IS-IS in another virtual routing instance. Unfortunately, the observed behavior on the MX platform suggests some form of leaking. I'm just not entirely convinced now that a virtual router really means a separate link-state database per virtual router. Within this context, a virtual router should behave just like a physical router --- or like a logical router, for that matter. Am I mistaken here? Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187 On Sun, 13 Jun 2010, Alan Gravett wrote: Use static host mapping for each VR/lo0.x to avoid confusion: set system static-host-mapping R1 sysid 0100.0011.0001 and so on... On Fri, Jun 11, 2010 at 7:23 PM, Clarke Morledge chm...@wm.edu wrote: I am trying to figure out how Junos handles IS-IS in an environment with virtual routers (VRs). I see weird behavior with some MX routers running 9.6 where some TLV information and some other details are bleeding between different VRs when IS-IS is the routing protocol in those VRs. By default, routing information in one VR should always remain separate from routing information in a different VR. With our MX infrastructure, we are stacking a bunch of different network topologies on top of one another using VRs to keep the routing tables separate. I would assume that if you run IS-IS in each VR that you will have a separate IS-IS database per VR, analogous to having a separate routing table per VR. But I am having my doubts. SNIP---SNIP-
[j-nsp] IS-IS database leaking across virtual routers?
) is but one example. I've seen other weird behavior that has more detrimental impact. For example, I can set the overload bit in a single VR and it will somehow impact IS-IS within a completely separate VR running on the same router. But the leaking problem doesn't impact every element in the IS-IS database or databases. For example, I have not seen any problem with routing information such as IP prefixes being leaked from one VR to another. Thankfully, the raw link-state details of each IP prefix seem to respect the VR boundaries. It is typically only a problem with some of the bells and whistles of IS-IS, such as the Hostname TLV, the Overload bit, and perhaps a few other things. Does anyone have any idea as to whether or not this is intended behavior by Junos (I hope not), or is this a bug in the virtual routing implementation? Thanks. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
Re: [j-nsp] EX4200 arp-inspection and examine-dhcp questions
On Thu, Apr 1, 2010 at 9:29 AM, Kessler, Ben ben.kess...@zenetra.com wrote: Hi Gang - We're implementing some EX4200s for a customer. A problem that we're having though is that devices configured with static IP addresses are not able to communicate on the network (DHCP hosts are fine). We believe that we've tracked this down to the arp-inspection and examine-dhcp options that are added by default when various port profiles are selected in the WebUI. I was wondering what others are doing to work around this behavior. We're running 9.5R4.3 on the switches (the code version that was recommended by our Juniper SE). Thanks, Ben Ben, My understanding is that you will need to configure static IP addresses for DHCP bindings on the access ports: http://www.juniper.net/techpubs/en_US/junos9.6/topics/task/configuration/port-security-static-ip-address-cli.html I haven't tested it myself, so I would be very curious to know what type of mileage you get on this. Clarke Morledge College of William and Mary Information Technology - Network Engineering Jones Hall (Room 18) Williamsburg VA 23187
Re: [j-nsp] OSPF LFA and LDP LSPs
Serge, Part of what you wrote included this:

Now I turn on OSPF LFA link-protection on the links and re-run the same tests:
---
se36...@pe1-stjhlab-re0 show route 10.10.80.2 logical-system PE10 detail

inet.0: 34 destinations, 34 routes (34 active, 0 holddown, 0 hidden)
10.10.80.2/32 (1 entry, 1 announced)
        *OSPF   Preference: 10
                Next hop type: Router
                Next-hop reference count: 26
                Next hop: 10.10.81.10 via xe-0/3/0.0 weight 0x1, selected
                Next hop: 10.10.81.23 via ge-1/3/3.0 weight 0xf000   - Huh???
                State: Active Int
                Local AS: 855
                Age: 4  Metric: 2
                Area: 0.0.0.0
                Task: OSPF
                Announcement bits (3): 2-LDP 3-KRT 5-Resolve tree 2
                AS path: I

inet.3: 7 destinations, 7 routes (7 active, 0 holddown, 0 hidden)
10.10.80.2/32 (1 entry, 1 announced)
                State: FlashAll
        *LDP    Preference: 9
                Next hop type: Router
                Next-hop reference count: 3
                Next hop: 10.10.81.10 via xe-0/3/0.0 weight 0x1, selected
                Label operation: Push 299776
                Next hop: 10.10.81.23 via ge-1/3/3.0 weight 0xf000   - Huh???
                Label operation: Push 299856
                State: Active Int
                Local AS: 855
                Age: 4  Metric: 1
                Task: LDP
                Announcement bits (2): 2-Resolve tree 1 3-Resolve tree 2
                AS path: I

I cannot speak to your traceroute issue, but my understanding is that the next-hop 10.10.81.23 references are the alternate paths that get put in your routing table by the LFA algorithm. These routes exist, but only in stand-by mode. So if the 10.10.81.10 next-hop ever goes away, traffic can immediately use this stand-by routing entry to forward the traffic while OSPF is recalculating new routes under the covers. This is loosely analogous to how Detours work in RSVP/MPLS Fast ReRoute -- though, admittedly, Fast ReRoute is much more involved. Then, once the OSPF recalculations are done in LFA, the routing table is updated with a new primary routing entry and another stand-by entry. Therefore, LFA effectively doubles the size of your routing table to accommodate all of the stand-by routes.
Unless I'm missing something, that is at least my understanding of how LFA actually works -- or at least how it is supposed to work. In other words, per your routing table, it is working as designed. However, this does not necessarily mean that OSPF LFA currently solves all of the microloop problems in some topologies.

If someone has a better explanation, I'd like to know, too.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
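For anyone following along, the knob in question is configured per interface under OSPF. A minimal sketch of what Serge described enabling (the interface names here are illustrative, taken from his output):

```
protocols {
    ospf {
        area 0.0.0.0 {
            interface xe-0/3/0.0 {
                link-protection;   # precompute a loop-free alternate next hop
            }                      # (the weight 0xf000 entry in the output above)
            interface ge-1/3/3.0;
        }
    }
}
```

With link-protection on, Junos keeps the alternate next hop installed alongside the primary, which is why both show up in the route detail output.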
[j-nsp] netflow v9 on a Juniper MX
My understanding is that the new Trio cards for the MX will allow you to do Netflow processing on the line card itself, without the need for a separate Multiservices DPC card. What I don't know is whether there is some flavor of 10.x available where this is currently supported on these new cards. In other words, the hardware is there, but the software isn't ready yet. Does anyone know for sure?

Of course, this doesn't really help you if you only have the DPCE 4x 10GE R cards.

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
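For contrast, this is roughly what v9 export looks like today with a Multiservices card in the chassis. I'm writing this from memory, so take the exact syntax with a grain of salt; the sp- interface, collector address, sampling rate, and template name are just examples:

```
services {
    flow-monitoring {
        version9 {
            template ipv4 {
                ipv4-template;
            }
        }
    }
}
forwarding-options {
    sampling {
        input {
            family inet {
                rate 100;            # sample 1 in 100 packets
            }
        }
        family inet {
            output {
                flow-server 192.0.2.50 {
                    port 9996;
                    version9 template ipv4;
                }
                interface sp-1/0/0 {      # the Multiservices DPC interface
                    source-address 192.0.2.1;
                }
            }
        }
    }
}
```

What I'm hoping the Trio cards eliminate is that sp- interface dependency, so the flow records get built in the Packet Forwarding Engine itself.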
[j-nsp] The P2MP LSP story when using LDP for VPLS?
I am in the process of building a VPLS implementation using LDP to build the LSPs but using BGP to handle the L2 signaling. I know that there is a dynamic way to configure point-to-multipoint (P2MP) LSPs when using RSVP, with the provider-tunnel keyword for each VPLS routing instance, but I don't quite understand how to do this with LDP. I'm getting a little lost when reading this:

http://jnpr.net/techpubs/en_US/junos10.0/information-products/topic-collections/feature-guide/vpls-traffic-flooding-p2mp-lsp-solutions.html

I see that there is some IETF draft work being done on P2MP for LDP:

http://tools.ietf.org/html/draft-ietf-mpls-ldp-p2mp-08

Does this mean that dynamic P2MP LSPs for LDP with VPLS aren't available yet, or is there some other workaround? I'm having some difficulty trying to wrap my head around this.

The ultimate purpose is to cut down on unnecessary packet replication due to broadcast, multicast, and unknown unicast within VPLS. I would rather not use RSVP, since it is more complex to configure than LDP, but perhaps P2MP over LDP isn't ready for primetime?

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
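For comparison, the dynamic RSVP flavor I referred to looks something like this per routing instance. This is only a sketch; the instance name is made up, and I've left out the rest of the VPLS stanza:

```
routing-instances {
    green-vpls {
        instance-type vpls;
        provider-tunnel {
            rsvp-te {
                label-switched-path-template {
                    default-template;   # dynamically signal a P2MP LSP
                }                       # for flooding in this VPLS instance
            }
        }
    }
}
```

What I can't find is the LDP equivalent of that provider-tunnel stanza, which makes me suspect the mLDP draft support isn't there yet.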
Re: [j-nsp] VPLS Multi-homing primary site link failure failover
Sean says:

Hi Clarke .. what's the config? Are you using irb interfaces? If no irb, and PE1 is directly connected to CE1, on interface down the VPLS primary should switch over. cheers Sean

Sean,

Yes, I am using irb interfaces. Since they are not listed under the site stanza, I did not think the irb would make any difference. Obviously, I was mistaken. Is there any way to force the failover mechanism to work when configured with an irb? Or is there a workaround to consider?

I am trying to terminate the L3 side of the VPLS instance with VRRP (defined in a VRF instance that includes the same irb) on the MX routers that are configured to support a VPLS multihomed site. Can I still do this, or does the irb have to be on a separate physical MX router?

Clarke Morledge
College of William and Mary
Information Technology - Network Engineering
Jones Hall (Room 18)
Williamsburg VA 23187
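In case it helps, here is the shape of the multi-homed site stanza I am working from (a sketch only; the instance name, site name, identifiers, and interface are placeholders, and the irb lives in a separate VRF as described above):

```
routing-instances {
    customer-vpls {
        instance-type vpls;
        interface ge-1/0/0.100;
        protocols {
            vpls {
                site ce1 {
                    site-identifier 1;
                    multi-homing;             # BGP-signaled VPLS multihoming
                    site-preference primary;  # this PE is primary for the site
                    interface ge-1/0/0.100;
                }
            }
        }
    }
}
```

Note that the irb is nowhere in the site stanza, which is why I did not expect it to affect the interface-down switchover.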