Hello Jonathan,

Thanks for your feedback.

As suggested, we changed the NIC buffer size to 1 packet for the simulation,
and also tried buffer sizes of 10, 50, and 75 packets.

The default NIC buffer size in ns-3 is 100 packets.
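
For reference, this is roughly how the device queue is configured on the
bottleneck link (a minimal sketch using the QueueSize-based attributes of
recent ns-3 releases; the rate and delay values here are placeholders, not
our exact topology):

    // Point-to-point bottleneck with a 1-packet NetDevice (NIC) queue.
    PointToPointHelper bottleneck;
    bottleneck.SetDeviceAttribute ("DataRate", StringValue ("10Mbps"));
    bottleneck.SetChannelAttribute ("Delay", StringValue ("50ms"));
    // Shrink the device queue from the 100-packet default to 1 packet:
    bottleneck.SetQueue ("ns3::DropTailQueue<Packet>",
                         "MaxSize", QueueSizeValue (QueueSize ("1p")));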

Additionally, we also ran the simulations with BQL enabled.
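
In ns-3, BQL is enabled through the traffic-control helper (again a sketch;
ns3::DynamicQueueLimits is ns-3's implementation of BQL, and the root queue
disc type shown here is only illustrative):

    // Install a queue disc and enable BQL (DynamicQueueLimits) beneath it.
    TrafficControlHelper tch;
    tch.SetRootQueueDisc ("ns3::CobaltQueueDisc");
    tch.SetQueueLimits ("ns3::DynamicQueueLimits");
    tch.Install (devices);  // devices: NetDeviceContainer for the bottleneck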

We observe that link utilization is significantly affected when the NIC
buffer size is kept small.

The results are available at the following link:

https://github.com/Daipu/COBALT/wiki/Link-Utilization-Graphs-with-Different-NetDeviceQueue-size

Thanks and Regards,
Jendaipou Palmei
Shefali Gupta

On Sun, Dec 9, 2018 at 6:51 PM Jonathan Morton <chromati...@gmail.com>
wrote:

> > On 9 Dec, 2018, at 10:37 am, Jendaipou Palmei <jendaipoupal...@gmail.com>
> > wrote:
> >
> > By hidden queues, do you mean the NIC buffers? ns-3 has a Linux-like
> > traffic control wherein the packets dequeued by a queue discipline are
> > enqueued into the NIC buffer.
>
> That's right.  Linux now uses BQL, which (given compatible NIC drivers)
> limits the number of packets in the NIC buffers to a very small value -
> much smaller than is evident from your data.  If you were to measure the
> end-to-end RTT of each, I'm certain you would see this effect dominating
> the mere 50ms latency you're trying to model.
>
> Ideally, AQM would be applied to the hardware queue anyway.  For simulation
> purposes, I recommend reducing the NIC buffer on the bottleneck interface
> to 1 packet, so that the simulated AQM's effects are easiest to measure.
>
>  - Jonathan Morton
>
>