Re: [j-nsp] Experience with J series
2009/9/24 Chris Kawchuk juniperd...@gmail.com:

> Yep. 30 ACLs with no issues (assuming straightforward things). Full BGP
> tables, OSPF area 0.0.0.0 inside, QoS, IPsec.

I'd warn you guys against running peers with full BGP tables on the J series with 1 GB of RAM. It was not a problem until 9.4, but since 9.4 JUNOS for the J series is flow-based only, so the fwdd daemon preallocates plenty of memory for stateful session tracking, just as ScreenOS does, even if you switch it to packet context.

Here is some output from a J2350 running 9.6 in a lab environment:

p...@j2350> show system processes extensive
[...]
  PID USERNAME  THR PRI NICE   SIZE    RES STATE    TIME   WCPU COMMAND
   11 root        1 171   52     0K    12K RUN    1069.4 95.80% idle
  778 root        1  96    0   482M   482M select  71.0H  0.98% fwdd
[...]

482 MB! 9.5R1 eats even a bit more (some 60 MB on top).

I myself tried to run two peers with full views on a J2320 (JUNOS 9.4/9.5, 1 GB of RAM) and ran into the BGP sessions dropping with a LowMem event.

Moreover, keep in mind that the J2320/J2350 are a worse value than the SRX240 in price/performance terms.

--
Regards,
Pavel

___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Netflow + OriginAS in logical systems
Hi Andree,

Normally I would say you might be missing the routing-options route-record feature; give it a try. But the following page seems quite negative about it:

http://www.juniper.net/techpubs/software/junos/junos94/swconfig-routing/overview_1.html

Having the route-record feature under the [logical-systems routing-options] stanza would help select the right rpd from which data should be copied in the case of a logical system.

Also, if my interpretation of the following page is correct, it makes a pretty bold statement about the restrictions of logical systems ("Generalized MPLS (GMPLS), IP Security (IPSec), point-to-multipoint label-switched paths (LSPs), port mirroring, and sampling are not supported"):

http://www.juniper.net/techpubs/en_US/junos9.6/information-products/topic-collections/feature-guide/logical-systems-overview-solutions.html

If there is no better answer, I can point you to a workaround in case you need an IP accounting solution: the pmacct project (free, open source) recently integrated both a NetFlow collector and a Quagga-based BGP daemon into a single daemon. The idea is that you let your logical system(s) send NetFlow data to the collector and iBGP-peer with it; stitching the two sets of information together (NetFlow + BGP) is then done at the collector (with the secondary advantage of having AS-PATH, Local Preference, MED, communities, etc. readily available). I presented this earlier in September '09 at a UKNOF meeting; in case anybody reading is interested, this is the link:

http://www.pmacct.net/lucente_pmacct_uknof14.pdf

Cheers,
Paolo

On Fri, Sep 25, 2009 at 06:52:49PM +0200, Andree Toonk wrote:

> Hi all,
>
> I'm trying to use cflow on our MX480s within a logical system but ran
> into an issue with AS resolution. I wonder if others have used cflow in
> a logical system and were able to get this working.
>
> The logical system has full BGP routing from 3 separate upstream ISPs.
> Exporting netflow works fine; however, the AS resolution doesn't seem
> to work correctly. All flows are reported with AS 0, except for those
> ASNs that are directly connected to the master instance. So it seems
> that while the flows are coming from the logical-system TX, it tries to
> determine the ASNs for the flows using the routing table in the master
> instance, resulting in many flows with AS 0.
>
> Are any of you aware of a way I can use cflow in this logical system
> with proper AS resolution? Or is this just a limitation of sampling in
> logical systems?
>
> This is the configuration we used. In the master:
>
> forwarding-options {
>     sampling {
>         input {
>             family inet {
>                 rate 100;
>             }
>         }
>         output {
>             cflowd x.x.x.x {
>                 port 23456;
>                 version 5;
>                 autonomous-system-type origin;
>             }
>         }
>     }
> }
> firewall {
>     filter all {
>         term all {
>             then {
>                 sample;
>                 accept;
>             }
>         }
>     }
> }
>
> Then on the interface towards one of our upstreams, in the logical
> system:
>
> interfaces {
>     ge-0/1/0 {
>         unit 0 {
>             family inet {
>                 filter {
>                     input all;
>                     output all;
>                 }
>                 address x.x.x.x/30;
>             }
>         }
>     }
> }
>
> Thanks,
> Andree
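[Editor's note: as a rough illustration of the pmacct approach Paolo describes, a minimal nfacctd configuration might look like the sketch below. The directive names follow pmacct's documented key-value format of that era, and the addresses, port, and peer count are placeholders; check everything against your pmacct version.]

```
! nfacctd.conf -- sketch: NetFlow collector with an embedded BGP daemon
daemonize: true
! listen for NetFlow exports coming from the router
nfacctd_ip: x.x.x.x
nfacctd_port: 2100
plugins: memory
aggregate: src_as, dst_as
! run a BGP daemon inside the collector; the logical system iBGP-peers with it
bgp_daemon: true
bgp_daemon_ip: x.x.x.x
bgp_daemon_max_peers: 10
! resolve AS numbers from the BGP feed rather than from the NetFlow records
nfacctd_as_new: bgp
```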
Re: [j-nsp] Experience with J series
Hi Pavel,

Thanks for your input. Based on the factsheets, the J series outperforms the SRX series in BGP capabilities. The only SRX that outperforms it is the 650, which looks like a really good deal (thanks for pointing it out to me!).

Nice weekend.

- Gregory

2009/9/26 Pavel Lunin plu...@senetsy.ru:

> I'd warn you guys against running peers with full BGP tables on the J
> series with 1 GB of RAM. It was not a problem until 9.4, but since 9.4
> JUNOS for the J series is flow-based only, so the fwdd daemon
> preallocates plenty of memory for stateful session tracking, just as
> ScreenOS does, even if you switch it to packet context.
>
> I myself tried to run two peers with full views on a J2320 (JUNOS
> 9.4/9.5, 1 GB) and ran into BGP sessions dropping with a LowMem event.
> Moreover, keep in mind that the J2320/J2350 are a worse value than the
> SRX240 in price/performance terms.
>
> --
> Regards,
> Pavel
Re: [j-nsp] Experience with J series
If you are running flow-based JUNOS, you could try this knob to turn it into packet-based mode:

security {
    forwarding-options {
        family {
            mpls {
                mode packet-based;
            }
        }
    }
}

On Sat, Sep 26, 2009 at 7:09 PM, Pavel Lunin plu...@senetsy.ru wrote:

> I'd warn you guys against running peers with full BGP tables on the J
> series with 1 GB of RAM. It was not a problem until 9.4, but since 9.4
> JUNOS for the J series is flow-based only, so the fwdd daemon
> preallocates plenty of memory for stateful session tracking, just as
> ScreenOS does, even if you switch it to packet context. [...]
>
> I myself tried to run two peers with full views on a J2320 (JUNOS
> 9.4/9.5, 1 GB) and ran into BGP sessions dropping with a LowMem event.
> Moreover, keep in mind that the J2320/J2350 are a worse value than the
> SRX240 in price/performance terms.

--
BR!
James Chen
Re: [j-nsp] Experience with J series
Sorry to reply to myself, but I meant that it outperforms the J series in the same form factor and price range.

2009/9/26 Gregory Agerba gregory.age...@gmail.com:

> Hi Pavel,
>
> Thanks for your input. Based on the factsheets, the J series outperforms
> the SRX series in BGP capabilities. The only SRX that outperforms it is
> the 650, which looks like a really good deal (thanks for pointing it out
> to me!).
>
> Nice weekend.
>
> - Gregory
Re: [j-nsp] Experience with J series
Hi 陈江,

You're right, this should almost always be done if you run several external peers with full views, but this knob only switches the box into router context. It doesn't make fwdd free the memory. The router I used to show the fwdd memory consumption also has this piece of config.

I heard some talk that Juniper is going to deploy different memory allocation models based on the mode and even on licenses (not sure whether that makes much sense), but for now router context does not give you any additional free DRAM; fwdd still eats about 500 MB.

In newer versions of JUNOS for J/SRX the idpd daemon also consumes quite a lot of memory even if you do not need IDP, but it is no problem to turn it off under the [edit system processes] hierarchy. So in some cases the best way is still to just use <= 9.3 packet mode.

--
Pavel

2009/9/26 陈江 iloveb...@gmail.com:

> If you are running flow-based JUNOS, you could try this knob to turn it
> into packet-based mode:
>
> security {
>     forwarding-options {
>         family {
>             mpls {
>                 mode packet-based;
>             }
>         }
>     }
> }
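[Editor's note: a sketch of the [edit system processes] approach Pavel mentions for keeping the IDP daemon from starting is shown below. The process name used here (idp-policy) is an assumption that may differ between releases, so verify it on your JUNOS version before committing.]

```
system {
    processes {
        idp-policy disable;
    }
}
```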
Re: [j-nsp] Experience with J series
Oops, I really missed the BGP route-count limitation of the SRX240, sorry. However, the 300k stated in the datasheet is not a hard limit for the J series; it is only a number well known to be supported with no issues. I wonder if this is different for the SRX. But in any case the BGP route reflector (RR) license is only available for the J series and the SRX650, so the SRX100/210/240 do not support RR at all.

BTW, running 2 peers with full views needs at least twice those 300k routes in the RIB. A J series box with JUNOS 9.5 is capable of loading them all into the RIB, but when it gets to calculating best paths and populating the FIB (which is also stored in DRAM on J/SRX) the process can't get enough memory. Stripping off everything longer than, say, /21 saves the day. But you'd rather not go there if you need to run full tables not only at the edge. Just use 9.3 packet mode.

--
Pavel

2009/9/26 Gregory Agerba gregory.age...@gmail.com:

> Hi Pavel,
>
> Thanks for your input. Based on the factsheets, the J series outperforms
> the SRX series in BGP capabilities. The only SRX that outperforms it is
> the 650, which looks like a really good deal (thanks for pointing it out
> to me!).
>
> Nice weekend.
>
> - Gregory
>
> 2009/9/26 Pavel Lunin plu...@senetsy.ru:
>
> > I'd warn you guys against running peers with full BGP tables on the J
> > series with 1 GB of RAM. [...]
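[Editor's note: Pavel's /21 trick could be expressed as a BGP import policy along the lines of the sketch below. The policy and group names are made up for illustration; adapt them to your configuration.]

```
policy-options {
    policy-statement NO-LONG-PREFIXES {
        term keep-short {
            from {
                /* accept anything /21 or shorter */
                route-filter 0.0.0.0/0 upto /21;
            }
            then accept;
        }
        term drop-rest {
            /* reject the more-specific routes to save RIB/FIB memory */
            then reject;
        }
    }
}
protocols {
    bgp {
        group upstreams {
            import NO-LONG-PREFIXES;
        }
    }
}
```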
Re: [j-nsp] EX4200 and Broadcom NICs on Linux Server
What kind of routing issue is this?

-----Original Message-----
From: juniper-nsp-boun...@puck.nether.net on behalf of Shane Ronan
Sent: Sat 9/26/2009 23:54
To: juniper-nsp
Subject: [j-nsp] EX4200 and Broadcom NICs on Linux Server

Has anyone else experienced routing issues with Broadcom NICs on a Linux server connected to an EX4200?

Shane