On (2012-03-16 14:20 -0300), Gomes, Joao (NSN - US/Mountain View) wrote:
> I found in documentation that MX MPC Buffer size is 100ms of the port
> speed.
>
> Is the buffer size the same for MPC1, MPC1Q, MPC2, MPC2Q and MPC2EQ?
No. The buffer size is never temporally limited, or at least I've not seen
it. I've seen in excess of 4s of queueing delay when running subrate speeds
on 10GE (if I've not explicitly restricted the allocated buffer space).

  # show qxchip 0 driver
  QX-chip     : 0
  rldram_size : 603979776

This seems to imply you have 512MB of ECC memory per QX in MX80 and the
MPCxQ cards, i.e. per Trio, so an MPC2Q has 1GB. In non-QX cards (that is,
not Q or EQ) you obviously have a lot less memory. You're free to assign
this as you wish to different classes on different interfaces inside that
Trio. I don't have an EQ card to test whether it has more memory, but I'd
expect so.

> We want to shape the traffic per VLAN and need to determine the buffer
> size allocated to each of it.

You can use 'show cos halp ifl X' and 'show qx tail-rule Y 0 0' to see how
many bytes of buffer have been assigned to a class.

When configuring scheduler buffer-size I would never use 'percent' or
'remainder'; I would always use 'temporal'. With temporal you don't need to
be acutely aware of how large the pool is, and you can more directly ensure
that the allocated byte size is what you want:

  main_rate * transmit-rate percent * temporal = bytes

So if you apply QoS to a 10G interface (no subrate shaping) and give BE
10%, your class rate is 1Gbps. If you then configure temporal 10k (10ms),
you'll have approximately 1Gbps * 10ms worth of buffer. (If your actual
offered rate in BE is 2Gbps, you'll have just 5ms worth of buffer, even
though you configured 10ms temporal.)

-- 
++ytti
_______________________________________________
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
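[Editor's note] The buffer arithmetic in the post can be sketched as a small Python helper. The function name is my own, and the division by 8 is my addition: rate times time gives bits, so I convert to bytes explicitly; Junos takes the temporal value in microseconds, as the post's "temporal 10k" (10ms) implies.

```python
def temporal_buffer_bytes(main_rate_bps, transmit_rate_percent, temporal_us):
    """Approximate per-class buffer from 'buffer-size temporal'.

    main_rate * (transmit-rate percent) * temporal yields bits;
    divide by 8 to express the buffer in bytes.
    """
    class_rate_bps = main_rate_bps * transmit_rate_percent / 100
    return class_rate_bps * (temporal_us / 1e6) / 8

# 10G port, BE at transmit-rate percent 10, temporal 10k (10,000 us = 10ms):
buf = temporal_buffer_bytes(10e9, 10, 10_000)
print(buf)            # 1250000.0 bytes (~1.25 MB)

# If the offered BE rate is 2Gbps instead of the allocated 1Gbps,
# the same buffer drains in half the time:
print(buf * 8 / 2e9)  # 0.005 s, i.e. the 5ms the post describes
```

This also makes the post's closing caveat concrete: the temporal value holds only at the class's allocated rate; at a higher offered rate the delay the buffer represents shrinks proportionally.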