Paul Lussier writes:
> Dave Johnson <[EMAIL PROTECTED]> writes:
> 
> > Note that as mentioned before, limiting bandwidth and introducing
> > latency and/or jitter are different things.
> >
> > If you want to simulate a bandwidth limited link you need to both
> > limit bandwidth and queue packets.  If you simply drop and don't queue
> > then there is no possibility of latency.
> >
> > Latency and jitter are side effects due to queuing prior to the
> > bandwidth limited hop.  Protocols such as TCP are designed to avoid
> > introducing latency when a slow link is in the path.
> >
> > Anyway, onto the implementation.
> >
> > The script below limits bandwidth in both directions when forwarding
> > through two interfaces.  Note you'll need to set up the appropriate
> > interfaces and routes.  Each side has its own bandwidth and queue
> > with a max size in bytes.  This is equivalent to a full-duplex T1
> > pipe using a linux box and 2 ethernet interfaces representing the two
> > endpoints of the T1.
> 
> Dave, does this add jitter as well?  I assume that since you're
> queueing, it does to some extent.  Would it also be wise to inject
> traffic on the simulated T1 connection by having other hosts
> communicating as well?  For example I could have 1 linux system in
> between 2 switches, on which were several other systems all
> communicating with each other doing things like generating web
> requests, copying large files, etc.
> 
> It seems this approach would add to the randomness of the connection
> to some extent, and increase both the latency and jitter experienced
> by the 2 systems we're really trying to test.
> 
> Thanks again!
> 

Latency and jitter due to queuing will be accurate; the only thing
this won't do is introduce the inherent latency due to link speed,
distance, etc...

Sending 1500 bytes over a T1 takes 7.8ms, plus forwarding delays,
distance, etc...  If you chain multiple links together you've
got even more.  The TC setup will forward each packet as fast as
possible, just introducing delays between them to simulate a
bandwidth limit.
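The 7.8ms figure above is just the serialization delay: packet size in bits divided by line rate. A minimal sketch, assuming the full 1.544 Mbps T1 line rate (the helper name is mine, not from any tool mentioned here):

```python
# Serialization delay: the time to clock one packet onto the wire.
def serialization_delay_ms(packet_bytes, link_bps):
    """Milliseconds to transmit packet_bytes at link_bps."""
    return packet_bytes * 8 / link_bps * 1000

# 1500-byte frame on a 1.544 Mbps T1
print(round(serialization_delay_ms(1500, 1_544_000), 1))  # -> 7.8
```

Each hop in a chain adds its own serialization delay, which is why multiple links compound the latency.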

In any case, if you want a 'real-world' experience you'll need to send
other stuff over the link at the same time as your test.  TCP
connections are easy.  If you want to bring the link to a crawl you can
use my UDP network test tool to send packets over the link at any rate:

http://davej.org/programs/untt/

# server listening on port 10000
./untt -l -v -p 10000

# client sending 400 byte packets at 4mbps (uni-directional)
./untt -vv -p 10000 -s 400 -r 4000 -c 1000000 192.168.11.2


The more bursts the more jitter, the more input the more latency.

Also note the queue size directly determines the max latency,
because bandwidth is fixed.  The 256KB queue gives a max latency of
1.333 seconds in each direction.
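The worst-case queuing latency is simply the queue size divided by the drain rate. A quick sketch of the arithmetic, assuming the 256KB queue means 256,000 bytes and the T1 drains at its 1.536 Mbps payload rate (those assumptions are mine, chosen because they reproduce the 1.333 s figure):

```python
# Max queuing latency: seconds to drain a completely full queue
# at the link's fixed bandwidth.
def max_queue_latency_s(queue_bytes, link_bps):
    """Seconds for a full queue of queue_bytes to drain at link_bps."""
    return queue_bytes * 8 / link_bps

# Assumed: 256,000-byte queue, 1.536 Mbps T1 payload rate
print(round(max_queue_latency_s(256_000, 1_536_000), 3))  # -> 1.333
```

Shrinking the queue trades latency for drops: a smaller queue caps latency lower but discards packets sooner under load.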

Flooding the link with a constant 4mbps as above fills one of the
queues in under a second, and we reach max latency in no time:

64 bytes from 192.168.11.2: icmp_seq=161 ttl=63 time=1350 ms
64 bytes from 192.168.11.2: icmp_seq=162 ttl=63 time=1345 ms
64 bytes from 192.168.11.2: icmp_seq=163 ttl=63 time=1353 ms
64 bytes from 192.168.11.2: icmp_seq=164 ttl=63 time=1348 ms
64 bytes from 192.168.11.2: icmp_seq=165 ttl=63 time=1349 ms
64 bytes from 192.168.11.2: icmp_seq=166 ttl=63 time=1356 ms
64 bytes from 192.168.11.2: icmp_seq=167 ttl=63 time=1350 ms
64 bytes from 192.168.11.2: icmp_seq=168 ttl=63 time=1350 ms
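The fill time follows from the same numbers: the queue grows at the offered rate minus the drain rate. A sketch, again assuming a 256,000-byte queue and a 1.536 Mbps T1 payload rate (my assumptions, not values from the script):

```python
# Time to fill the queue: the backlog grows at (input rate - drain rate)
# whenever the offered load exceeds the link bandwidth.
def fill_time_s(queue_bytes, input_bps, link_bps):
    """Seconds until the queue is full; input_bps must exceed link_bps."""
    return queue_bytes * 8 / (input_bps - link_bps)

# Assumed: 256,000-byte queue, 4 Mbps flood, 1.536 Mbps drain
print(round(fill_time_s(256_000, 4_000_000, 1_536_000), 2))  # -> 0.83
```

That matches the ping output above: the round-trip times sit pinned near the ~1.35 s maximum almost immediately after the flood starts.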


-- 
Dave

_______________________________________________
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss