Disabling Basic-Table certainly bought you some time. I agree that it still doesn't look good, though; I suspect you are running into a software issue. 11.4 is no longer a supported version: 12.3 is the minimum supported release today, and 13.3R6 is the recommended one. Is it possible for you to upgrade?
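If an upgrade has to wait, one stopgap that sometimes helps on jtree-limited DPCs is trimming the FIB itself with a forwarding-table export policy. A rough sketch only (the policy name "fib-trim" and the /25 cut-off are illustrative, and this needs lab verification; it is only safe while a default route or covering aggregates remain, otherwise the rejected destinations blackhole in the PFE):

```
/* Illustrative sketch, not verified on 11.4 -- test in a lab first.
   Keeps IPv4 prefixes longer than /24 out of the PFE forwarding table;
   requires a default route or covering aggregates to stay reachable. */
policy-options {
    policy-statement fib-trim {
        term long-prefixes {
            from {
                route-filter 0.0.0.0/0 prefix-length-range /25-/32;
            }
            then reject;
        }
        term rest {
            then accept;
        }
    }
}
routing-options {
    forwarding-table {
        export fib-trim;
    }
}
```

This only reduces what gets pushed to the line cards; the RE still holds the full table, so BGP itself is unaffected.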
Best Regards,
-Phil

> On Jul 21, 2015, at 7:23 PM, Jeff Meyers <jeff.mey...@gmx.net> wrote:
>
> Hi Phil,
>
> sure:
>
> {master}
> jeff@cr0> show configuration | display set | match rpf-check
>
> {master}
> nico@FRA4.cr0> show version
> Hostname: cr0
> Model: mx480
> JUNOS Base OS boot [11.4R9.4]
> JUNOS Base OS Software Suite [11.4R9.4]
> JUNOS Kernel Software Suite [11.4R9.4]
> JUNOS Crypto Software Suite [11.4R9.4]
> JUNOS Packet Forwarding Engine Support (M/T Common) [11.4R9.4]
> JUNOS Packet Forwarding Engine Support (MX Common) [11.4R9.4]
> JUNOS Online Documentation [11.4R9.4]
> JUNOS Voice Services Container package [11.4R9.4]
> JUNOS Border Gateway Function package [11.4R9.4]
> JUNOS Services AACL Container package [11.4R9.4]
> JUNOS Services LL-PDF Container package [11.4R9.4]
> JUNOS Services PTSP Container package [11.4R9.4]
> JUNOS Services Stateful Firewall [11.4R9.4]
> JUNOS Services NAT [11.4R9.4]
> JUNOS Services Application Level Gateways [11.4R9.4]
> JUNOS Services Captive Portal and Content Delivery Container package [11.4R9.4]
> JUNOS Services RPM [11.4R9.4]
> JUNOS Services HTTP Content Management package [11.4R9.4]
> JUNOS AppId Services [11.4R9.4]
> JUNOS IDP Services [11.4R9.4]
> JUNOS Services Crypto [11.4R9.4]
> JUNOS Services SSL [11.4R9.4]
> JUNOS Services IPSec [11.4R9.4]
> JUNOS Runtime Software Suite [11.4R9.4]
> JUNOS Routing Software Suite [11.4R9.4]
>
> {master}
> nico@FRA4.cr0> show route summary
> Autonomous system number: XXXXX
> Router ID: A.B.C.D
>
> inet.0: 546231 destinations, 1747898 routes (545029 active, 11 holddown, 2994 hidden)
>     Direct:    1143 routes,   1140 active
>     Local:     1144 routes,   1144 active
>     OSPF:        81 routes,     18 active
>     BGP:    1745429 routes, 542631 active
>     Static:     100 routes,     95 active
>     IGMP:         1 routes,      1 active
>
> Basic-Table.inet.0: 212783 destinations, 215070 routes (212778 active, 5 holddown, 0 hidden)
>     Direct:    2283 routes,   1140 active
>     Local:     2288 routes,   1144 active
>     OSPF:        17 routes,     17 active
>     BGP:     210387 routes, 210382 active
>     Static:      95 routes,     95 active
>
> inet6.0: 23331 destinations, 39242 routes (23330 active, 1 holddown, 113 hidden)
>     Direct:     451 routes,    368 active
>     Local:      373 routes,    373 active
>     OSPF3:        9 routes,      9 active
>     BGP:      38399 routes,  22571 active
>     Static:      10 routes,      9 active
>
> Basic-Table.inet6.0: 12295 destinations, 12295 routes (12292 active, 3 holddown, 0 hidden)
>     Direct:     366 routes,    366 active
>     Local:      373 routes,    373 active
>     OSPF3:        8 routes,      8 active
>     BGP:      11539 routes,  11536 active
>     Static:       9 routes,      9 active
>
> {master}
>
> I actually thought this "Basic-Table" was inactive. It is not, so I'm going to deactivate it now. Since it was holding >200k routes, that is certainly a lot. Deactivating it made the syslog message disappear, but it didn't free up as much memory as I was hoping for:
>
> GOT: Jtree memory segment 0 (Context: 0x44976cc8)
> GOT: -------------------------------------------
> GOT: Memory Statistics:
> GOT:     16777216 bytes total
> GOT:     14613176 bytes used
> GOT:      2145824 bytes available (865792 bytes from free pages)
> GOT:         3024 bytes wasted
> GOT:        15192 bytes unusable
> GOT:        32768 pages total
> GOT:         6338 pages used (2568 pages used in page alloc)
> GOT:        24739 pages partially used
> GOT:         1691 pages free (max contiguous = 380)
>
> Still doesn't look too glorious, right?
>
> Best,
> Jeff
>
> On 22.07.2015 at 01:06, Phil Rosenthal wrote:
>> Can you paste the output of these commands:
>> show conf | display set | match rpf-check
>> show ver
>> show route sum
>>
>> DPC should have enough memory for ~1M FIB. This can get divided in half if you are using RPF. If you have multiple routing instances, this also can contribute to the problem.
>>
>> Best Regards,
>> -Phil Rosenthal
>>> On Jul 21, 2015, at 6:56 PM, Jeff Meyers <jeff.mey...@gmx.net> wrote:
>>>
>>> Hello list,
>>>
>>> we seem to be running into limits with an MX480 with RE-2000 and 2x DPCE-4XGE-R, since we are seeing these new messages in the syslog:
>>>
>>> Jul 22 00:50:36 cr0 fpc0 RSMON: Resource Category:jtree Instance:jtree0-seg0 Type:free-dwords Available:83072 is less than LWM limit:104857, rsmon_syslog_limit()
>>> Jul 22 00:50:36 cr0 fpc0 RSMON: Resource Category:jtree Instance:jtree1-seg0 Type:free-pages Available:1326 is less than LWM limit:1638, rsmon_syslog_limit()
>>> Jul 22 00:50:36 cr0 fpc1 RSMON: Resource Category:jtree Instance:jtree0-seg0 Type:free-pages Available:1316 is less than LWM limit:1638, rsmon_syslog_limit()
>>> Jul 22 00:50:37 cr0 fpc1 RSMON: Resource Category:jtree Instance:jtree0-seg0 Type:free-dwords Available:84224 is less than LWM limit:104857, rsmon_syslog_limit()
>>> Jul 22 00:50:37 cr0 fpc0 RSMON: Resource Category:jtree Instance:jtree1-seg0 Type:free-dwords Available:84864 is less than LWM limit:104857, rsmon_syslog_limit()
>>>
>>> Here is some more output from the FPC:
>>>
>>> jeff@cr0> request pfe execute target fpc0 command "show rsmon"
>>> SENT: Ukern command: show rsmon
>>> GOT:
>>> GOT: category  instance     type         total    lwm_limit hwm_limit free
>>> GOT: --------  -----------  ------------ -------- --------- --------- --------
>>> GOT: jtree     jtree0-seg0  free-pages      32768      1638      4915     1245
>>> GOT: jtree     jtree0-seg0  free-dwords   2097152    104857    314572    79680
>>> GOT: jtree     jtree0-seg1  free-pages      32768      1638      4915    22675
>>> GOT: jtree     jtree0-seg1  free-dwords   2097152    104857    314572  1451200
>>> GOT: jtree     jtree1-seg0  free-pages      32768      1638      4915     1267
>>> GOT: jtree     jtree1-seg0  free-dwords   2097152    104857    314572    81088
>>> GOT: jtree     jtree1-seg1  free-pages      32768      1638      4915    23743
>>> GOT: jtree     jtree1-seg1  free-dwords   2097152    104857    314572  1519552
>>> GOT: jtree     jtree2-seg0  free-pages      32768      1638      4915     1266
>>> GOT: jtree     jtree2-seg0  free-dwords   2097152    104857    314572    81024
>>> GOT: jtree     jtree2-seg1  free-pages      32768      1638      4915    23732
>>> GOT: jtree     jtree2-seg1  free-dwords   2097152    104857    314572  1518848
>>> GOT: jtree     jtree3-seg0  free-pages      32768      1638      4915     1232
>>> GOT: jtree     jtree3-seg0  free-dwords   2097152    104857    314572    78848
>>> GOT: jtree     jtree3-seg1  free-pages      32768      1638      4915    23731
>>> GOT: jtree     jtree3-seg1  free-dwords   2097152    104857    314572  1518784
>>> LOCAL: End of file
>>>
>>> {master}
>>> jeff@cr0> request pfe execute target fpc0 command "show jtree 0 memory extensive"
>>> SENT: Ukern command: show jtree 0 memory extensive
>>> GOT:
>>> GOT: Jtree memory segment 0 (Context: 0x44976cc8)
>>> GOT: -------------------------------------------
>>> GOT: Memory Statistics:
>>> GOT:     16777216 bytes total
>>> GOT:     15299920 bytes used
>>> GOT:      1459080 bytes available (660480 bytes from free pages)
>>> GOT:         3024 bytes wasted
>>> GOT:        15192 bytes unusable
>>> GOT:        32768 pages total
>>> GOT:        26528 pages used (2568 pages used in page alloc)
>>> GOT:         4950 pages partially used
>>> GOT:         1290 pages free (max contiguous = 373)
>>> GOT:
>>> GOT: Partially Filled Pages (In bytes):-
>>> GOT: Unit  Avail   Overhead
>>> GOT:    8  674344         0
>>> GOT:   16  107840         0
>>> GOT:   24   13296      4792
>>> GOT:   32     288         0
>>> GOT:   48    2832     10400
>>> GOT:
>>> GOT: Free Page Lists(Pg Size = 512 bytes):-
>>> GOT: Page Bucket  Avail(Bytes)
>>> GOT:     1-1            140288
>>> GOT:     2-2            112640
>>> GOT:     3-3             76800
>>> GOT:     4-4             49152
>>> GOT:     5-5              7680
>>> GOT:     6-6             15360
>>> GOT:     7-7             25088
>>> GOT:     8-8              8192
>>> GOT:     9-11             5632
>>> GOT:    12-17             6656
>>> GOT:    18-26            22016
>>> GOT:    27-32768        190976
>>> GOT:
>>> GOT: Fragmentation Index = 0.869, (largest free = 190976)
>>> GOT: Counters:
>>> GOT:   465261655 allocs (0 failed)
>>> GOT:   0 releases(partial 0)
>>> GOT:   463785484 frees
>>> GOT:   0 holds
>>> GOT:   9 pending frees(pending bytes 88)
>>> GOT:   0 pending forced
>>> GOT:   0 times free blocked
>>> GOT:   0 sync writes
>>> GOT: Error Counters:-
>>> GOT:   0 bad params
>>> GOT:   0 failed frees
>>> GOT:   0 bad cookie
>>> GOT:
>>> GOT: Jtree memory segment 1 (Context: 0x449f87e8)
>>> GOT: -------------------------------------------
>>> GOT: Memory Statistics:
>>> GOT:     16777216 bytes total
>>> GOT:      5123760 bytes used
>>> GOT:     11650408 bytes available (11609600 bytes from free pages)
>>> GOT:         2704 bytes wasted
>>> GOT:          344 bytes unusable
>>> GOT:        32768 pages total
>>> GOT:         9912 pages used (8976 pages used in page alloc)
>>> GOT:          181 pages partially used
>>> GOT:        22675 pages free (max contiguous = 22672)
>>> GOT:
>>> GOT: Partially Filled Pages (In bytes):-
>>> GOT: Unit  Avail  Overhead
>>> GOT:    8  25352         0
>>> GOT:   16  11072         0
>>> GOT:   32    384         0
>>> GOT:   40    440        32
>>> GOT:   48   1056       256
>>> GOT:   56    448         8
>>> GOT:   64    448         0
>>> GOT:   72    360         8
>>> GOT:   80    400        32
>>> GOT:  168    336        16
>>> GOT:  256    512        32
>>> GOT:
>>> GOT: Free Page Lists(Pg Size = 512 bytes):-
>>> GOT: Page Bucket  Avail(Bytes)
>>> GOT:     3-3              1536
>>> GOT:    27-32768      11608064
>>> GOT:
>>> GOT: Fragmentation Index = 0.004, (largest free = 11608064)
>>> GOT: Counters:
>>> GOT:   29941803 allocs (0 failed)
>>> GOT:   0 releases(partial 0)
>>> GOT:   29888786 frees
>>> GOT:   0 holds
>>> GOT:   1 pending frees(pending bytes 8)
>>> GOT:   0 pending forced
>>> GOT:   0 times free blocked
>>> GOT:   0 sync writes
>>> GOT: Error Counters:-
>>> GOT:   0 bad params
>>> GOT:   0 failed frees
>>> GOT:   0 bad cookie
>>> GOT:
>>> GOT: Context: 0x4296cc58
>>> LOCAL: End of file
>>>
>>> I furthermore found this article in the Juniper KB:
>>>
>>> http://kb.juniper.net/InfoCenter/index?page=content&id=KB19015&actp=search&viewlocale=en_US&searchid=1236602855555
>>>
>>> Is it really possible that the MX480 cannot handle more than roughly 500k routes in the FPC? What are my options here? Do I have to upgrade the SCB and get some new interface modules in order to keep this box running?
>>>
>>> What are my options to buy some time? Where is the right knob to aggregate routes (if that's a good idea) to - let's say - /23?
>>>
>>> Thanks in advance!
>>>
>>> Jeff
>>> _______________________________________________
>>> juniper-nsp mailing list  juniper-nsp@puck.nether.net
>>> https://puck.nether.net/mailman/listinfo/juniper-nsp