Well, we have applied the scheduler map to the interface, but we're still seeing 100% drops in queue 1, which is where CS2 traffic lands. It is literally dropping every packet in queue 1, and I don't understand enough about what I'm seeing to work out why.
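As background for the output below: a code point like CS2 reaches a queue in two steps — the classifier maps the DSCP to a forwarding class, and the forwarding class maps to a queue. Both steps can be checked with standard Junos show commands (the classifier name here is the one from the config further down):

    show class-of-service classifier name DSCPV4-CLASSIFIER    <- DSCP code point -> forwarding class
    show class-of-service forwarding-class                     <- forwarding class -> queue number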
Queue counters:       Queued packets    Transmitted packets    Dropped packets
  0  HSD, BASIC-D               4775                   4775                  0
  1  MNGMT, VOIP-               4913                      0               4913
  2  UET, CDN, VO                  0                      0                  0
  3  VOIP-BEARER,                 81                     81                  0

show configuration class-of-service interfaces ge-2/2/0
apply-groups CRAN-P2P-COS;

show configuration groups CRAN-P2P-COS
class-of-service {
    interfaces {
        <*> {
            scheduler-map QOS-MAP;
            unit 0 {
                classifiers {
                    dscp DSCPV4-CLASSIFIER;
                    dscp-ipv6 DSCPV6-CLASSIFIER;
                    exp EXP-CLASSIFIER;
                }
                rewrite-rules {
                    dscp DSCPV4-REWRITE;
                    dscp-ipv6 DSCPV6-REWRITE;
                    exp EXP-REWRITE;
                }
            }
        }
    }
}

show class-of-service interface ge-2/2/0
Physical interface: ge-2/2/0, Index: 270
Queues supported: 8, Queues in use: 4
  Scheduler map: QOS-MAP, Index: 26435

  Logical interface: ge-2/2/0.0, Index: 205
    Object      Name                 Type         Index
    Rewrite     DSCPV4-REWRITE       dscp         39698
    Rewrite     DSCPV6-REWRITE       dscp-ipv6     6938
    Classifier  DSCPV4-CLASSIFIER    dscp          7318
    Classifier  DSCPV6-CLASSIFIER    dscp-ipv6    40094

show class-of-service scheduler-map QOS-MAP
Scheduler map: QOS-MAP, Index: 26435

  Scheduler: TRAFFIC-CLASS-1-SCHEDULER, Forwarding class: BASIC-DATA, Index: 34325
    Transmit rate: 20 percent, Rate Limit: none, Buffer size: 50000 us, Priority: low
    Excess Priority: unspecified
    Drop profiles:
      Loss priority   Protocol    Index    Name
      Low             any         41108    DROP-LOW
      Medium low      any             1    <default-drop-profile>
      Medium high     any             1    <default-drop-profile>
      High            any          2270    DROP-HIGH

  Scheduler: TRAFFIC-CLASS-2-SCHEDULER, Forwarding class: PRIORITY-DATA, Index: 34329
    Transmit rate: 30 percent, Rate Limit: none, Buffer size: 20000 us, Priority: low
    Excess Priority: unspecified
    Drop profiles:
      Loss priority   Protocol    Index    Name
      Low             any         41108    DROP-LOW
      Medium low      any             1    <default-drop-profile>
      Medium high     any             1    <default-drop-profile>
      High            any          2270    DROP-HIGH

  Scheduler: TRAFFIC-CLASS-3-SCHEDULER, Forwarding class: VOD, Index: 34333
    Transmit rate: 45 percent, Rate Limit: none, Buffer size: 10000 us, Priority: low
    Excess Priority: unspecified
    Drop profiles:
      Loss priority   Protocol    Index    Name
      Low             any             1    <default-drop-profile>
      Medium low      any             1    <default-drop-profile>
      Medium high     any             1    <default-drop-profile>
      High            any          2270    DROP-HIGH

  Scheduler: TRAFFIC-CLASS-4-SCHEDULER, Forwarding class: PREMIUM-DATA, Index: 34305
    Transmit rate: unspecified, Rate Limit: none, Buffer size: 35000 us, Priority: strict-high
    Excess Priority: unspecified
    Drop profiles:
      Loss priority   Protocol    Index    Name
      Low             any             1    <default-drop-profile>
      Medium low      any             1    <default-drop-profile>
      Medium high     any             1    <default-drop-profile>
      High            any             1    <default-drop-profile>

TRAFFIC-CLASS-1-SCHEDULER {
    transmit-rate percent 20;
    buffer-size temporal 50k;
    priority low;
    drop-profile-map loss-priority low protocol any drop-profile DROP-LOW;
    drop-profile-map loss-priority high protocol any drop-profile DROP-HIGH;
}
TRAFFIC-CLASS-2-SCHEDULER {
    transmit-rate percent 30;
    buffer-size temporal 20k;
    priority low;
    drop-profile-map loss-priority low protocol any drop-profile DROP-LOW;
    drop-profile-map loss-priority high protocol any drop-profile DROP-HIGH;
}
TRAFFIC-CLASS-3-SCHEDULER {
    transmit-rate percent 45;
    buffer-size temporal 10k;
    priority low;
    drop-profile-map loss-priority high protocol any drop-profile DROP-HIGH;
}
TRAFFIC-CLASS-4-SCHEDULER {
    buffer-size temporal 35k;
    priority strict-high;
}

On Fri, Jul 20, 2012 at 10:39 PM, John Neiberger <jneiber...@gmail.com> wrote:
> On Fri, Jul 20, 2012 at 3:49 PM, Wayne Tucker <wa...@tuckerlabs.com> wrote:
>> Does show interfaces <blah> extensive on the interface between Router A
>> and Device A show any drops? IIRC, the default scheduler map does not
>> define schedulers for anything other than be and nc - so if you're
>> classifying the packets on input then it could be that they're going to
>> a class that has no resources on the egress interface.
>>
>> :w
>
> This is certainly what is happening.
> I checked and saw that we're seeing output drops in queue 1, and based
> on the reading I did tonight, it sounds like the default is for 95% of
> the bandwidth to be assigned to best effort in queue 0, with 5% set
> aside for network control in queue 3. The fact that we're seeing all
> those drops in queue 1 pretty much proves the issue. We have some
> groups configured that have the right scheduler map on them; I just
> need to determine which group is the right one and apply it to the
> right interfaces.
>
> I haven't had a chance to apply the fix yet, and all of the people who
> have access to the end devices for testing are gone for the weekend,
> but I wanted to thank everyone for the help on this. I'm pretty new to
> Juniper, and everyone looking at this (including JTAC) was stumped.
>
> Thanks again,
> John

_______________________________________________
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
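The fix described above amounts to applying the group that carries the scheduler map under class-of-service on the congested egress interface. A sketch, with a placeholder interface name (the actual interface toward Device A is not identified in the thread):

    class-of-service {
        interfaces {
            ge-x/y/z {                     /* placeholder: egress interface toward Device A */
                apply-groups CRAN-P2P-COS; /* pulls in scheduler-map QOS-MAP plus classifiers/rewrites */
            }
        }
    }

Without a scheduler map applied there, Junos falls back to the default map, which only allocates resources to best-effort (queue 0) and network-control (queue 3), so traffic classified into queue 1 is starved and dropped.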