If you don't need the lossless queue then you can axe most of it. This is
what I've done:

set class-of-service shared-buffer ingress percent 100
set class-of-service shared-buffer ingress buffer-partition lossless percent 5
set class-of-service shared-buffer ingress buffer-partition lossy percent 95
set class-of-service shared-buffer ingress buffer-partition lossless-headroom percent 0
set class-of-service shared-buffer egress percent 100
set class-of-service shared-buffer egress buffer-partition lossless percent 5
set class-of-service shared-buffer egress buffer-partition lossy percent 90
set class-of-service shared-buffer egress buffer-partition multicast percent 5
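As a quick sanity check (mine, not from the switch): each partition is just
the configured percentage applied to the shared pool, where the shared pool
is the total buffer minus the dedicated buffer. A small arithmetic sketch in
Python, using the pool sizes from the output further down:

```python
# Arithmetic sketch (not Junos code): partition size = shared pool * percent.
# Pool sizes taken from the "show class-of-service shared-buffer" output below.
ingress_shared_kb = 9567.19  # 12480.00 KB total minus 2912.81 KB dedicated
egress_shared_kb = 8736.00   # 12480.00 KB total minus 3744.00 KB dedicated

def partition_kb(shared_kb: float, percent: float) -> float:
    """Size of one buffer partition given its configured percentage."""
    return shared_kb * percent / 100.0

# Ingress: lossless 5%, lossy 95%, lossless-headroom 0%
assert abs(partition_kb(ingress_shared_kb, 5) - 478.36) < 0.01
assert abs(partition_kb(ingress_shared_kb, 95) - 9088.83) < 0.01
# Egress: lossless 5%, lossy 90%, multicast 5%
assert abs(partition_kb(egress_shared_kb, 5) - 436.80) < 0.01
assert abs(partition_kb(egress_shared_kb, 90) - 7862.40) < 0.01
```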
> show class-of-service shared-buffer
Ingress:
  Total Buffer         :  12480.00 KB
  Dedicated Buffer     :   2912.81 KB
  Shared Buffer        :   9567.19 KB
    Lossless           :    478.36 KB
    Lossless Headroom  :      0.00 KB
    Lossy              :   9088.83 KB
  Lossless Headroom Utilization:
  Node Device      Total         Used         Free
     0           0.00 KB      0.00 KB      0.00 KB

Egress:
  Total Buffer         :  12480.00 KB
  Dedicated Buffer     :   3744.00 KB
  Shared Buffer        :   8736.00 KB
    Lossless           :    436.80 KB
    Multicast          :    436.80 KB
    Lossy              :   7862.40 KB

You may be able to tweak even further for your own needs.

On Wed, May 16, 2018 at 12:06 PM, Brian Rak <b...@gameservers.com> wrote:
> We've been trying to track down why our 5100's are dropping traffic due to
> lack of buffer space, even with very low link utilization.
>
> It seems like they're classifying all our traffic as best-effort:
>
>> show interfaces xe-0/0/49:0 extensive
> ....
> Carrier transitions: 1, Errors: 0, Drops: 276796488, Collisions: 0, Aged
> packets: 0, FIFO errors: 0, HS link CRC errors: 0, MTU errors: 0, Resource
> errors: 0, Bucket drops: 0
> Egress queues: 12 supported, 5 in use
> Queue counters:  Queued packets  Transmitted packets  Dropped packets
>    0                          0        1876090180637        276796488
>    3                          0                    0                0
>    4                          0                    0                0
>    7                          0               663877                0
>    8                          0                    0                0
>
> Then, if we look at those queues:
>
>> show class-of-service forwarding-class
> Forwarding class       ID    Queue  Policing priority   No-Loss
>   best-effort           0        0             normal  Disabled
>   fcoe                  1        3             normal   Enabled
>   no-loss               2        4             normal   Enabled
>   network-control       3        7             normal  Disabled
>   mcast                 8        8             normal  Disabled
>
> So whatever, we've got queues configured for traffic we never see. That's
> not really a huge deal. But then:
>
>> show class-of-service shared-buffer egress
> Egress:
>   Total Buffer       :  12480.00 KB
>   Dedicated Buffer   :   3744.00 KB
>   Shared Buffer      :   8736.00 KB
>     Lossless         :   4368.00 KB
>     Multicast        :   1659.84 KB
>     Lossy            :   2708.16 KB
>
> To me, it appears that the 5100's by default reserve about 70% of the
> available shared port buffers for lossless+multicast traffic.
> The documentation seems to back me up here:
>
> https://www.juniper.net/documentation/en_US/junos/topics/concept/cos-qfx-series-buffer-configuration-understanding.html#jd0e1441
> https://www.juniper.net/documentation/en_US/junos/topics/example/cos-shared-buffer-allocation-lossy-ucast-qfx-series-configuring.html
>
> Has anyone else encountered this? We're going to be adjusting the buffers
> per the documentation, but I'd be really interested to hear if anyone has
> hit this before.
>
> _______________________________________________
> juniper-nsp mailing list  juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp

--
[stillwa...@gmail.com ~]$ cat .signature
cat: .signature: No such file or directory
[stillwa...@gmail.com ~]$
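For what it's worth, the "about 70%" estimate in the quoted message checks
out against the default egress numbers shown there. A quick arithmetic
sketch (values copied from the quoted output above, not Junos internals):

```python
# Check the "~70% reserved for lossless+multicast" estimate against the
# default egress shared-buffer numbers quoted above (all values in KB).
shared_kb = 8736.00
lossless_kb = 4368.00    # works out to 50% of the shared pool
multicast_kb = 1659.84   # works out to 19% of the shared pool
lossy_kb = 2708.16       # the remaining 31% is left for lossy unicast

# The three partitions should account for the whole shared pool.
assert abs(lossless_kb + multicast_kb + lossy_kb - shared_kb) < 0.01

reserved_fraction = (lossless_kb + multicast_kb) / shared_kb
assert abs(reserved_fraction - 0.69) < 0.005  # 69%: "about 70%" as stated
```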