We've been running into an issue with early tail drops on A9K-8T-L cards, and I'm trying to wrap my head around how buffering works on them. I get the impression that they don't have dedicated per-interface output queues and instead use some sort of shared buffering mechanism. We have an egress policy-map applied and originally had no queue-limit configured. It turns out that this left the default class with only a 64 KB buffer, which led to a huge number of tail drops far earlier than expected, since most of the traffic on this link is default class.
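(For scale, 64 KB at 10 Gbps is only about 50 microseconds of buffering, so it doesn't take much of a burst to overflow it.)

For reference, this is roughly the shape of what we have on the box now (IOS XR syntax; the policy and interface names here are made up, and the queue-limit line under class-default is the knob we've been turning):

  policy-map EGRESS-POLICY
   class class-default
    queue-limit 512 kbytes
   !
   end-policy-map
  !
  interface TenGigE0/0/0/0
   service-policy output EGRESS-POLICY
  !

The per-class tail-drop counters we've been comparing are the ones shown by "show policy-map interface TenGigE0/0/0/0 output".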
We've started bumping up the queue-limit value to see if it reduces the tail drops. This traffic is not latency-sensitive, so I'm not concerned about increasing the buffer depth; I just want to significantly reduce the tail drops we're seeing. We tried a value of 128 KB, and that helped a bit. Then we tried 512 KB over the weekend, and that seems to get us much closer to the expected result. I think I'm going to bump it up to 768 KB next and see how that goes.

To be honest, though, I don't really understand queueing on this box. Based on what I've read, it seems that instead of fixed per-interface output buffers, these cards use virtual output queues. But I'd swear I've read somewhere that VOQs are not related to output buffers in the way I'm thinking, and that they're more about queues across the fabric between linecards. Not sure about that one. Can anyone shed some light on this?

Many thanks,
John