> It actually depends on the number of (transmitting) hosts on the network -
> the more hosts, the lower the max utilisation.
> A network with one main transmitting host can easily get 90% plus
> utilisation (hook-up two machines and do an ftp from one to the other).
> At the upper end ethernet maxes out at about 30% utilisation!

Ahh, that is not actually true. This is a very common misconception,
no doubt propagated by the token ring proponents.

Unfortunately I can't find the doco, but we had this argument many years
ago and our network guru of the time showed us the results of a test
using several hundred servers on thick (10Mbit) ethernet. What they in
fact found was that initially there were a large number of collisions,
and then the traffic settled down and the ethernet sustained nearly 90%
throughput. How is this possible? Firstly, each server listens to the
wire before transmitting, so collisions only occur when two servers
decide to transmit at almost exactly the same instant. When a collision
occurs (and this is the important bit), each server backs off for a
random amount of time drawn from an *exponentially* growing window. This
means that each time they collide, the amount of time they get backed
off roughly doubles. Thus repeated collisions lead to machines basically
being told they are off the air for a while. So 90% of the time the
network is carrying useful data and 10% of the time it's telling
machines to shut up and stop talking.
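To make the backoff mechanics concrete, here's a minimal sketch of the truncated binary exponential backoff that 802.3 ethernet uses; the function name and the print loop are just illustrative, but the cap of 10 doublings and the 0..2^n-1 slot window are how the standard describes it:

```python
import random

def backoff_slots(collisions, rng=random):
    """Return the number of slot times a station waits after
    `collisions` successive collisions. The window doubles each
    time, truncated at 10 doublings as in 802.3."""
    exponent = min(collisions, 10)
    return rng.randrange(2 ** exponent)  # uniform over 0 .. 2**exponent - 1

# The expected wait roughly doubles per collision, so a station that
# keeps colliding is progressively silenced while others get through:
for n in range(1, 6):
    mean_wait = (2 ** min(n, 10) - 1) / 2
    print(f"after {n} collisions: expected wait ~{mean_wait} slots")
```

The doubling window is the whole trick: persistent colliders thin themselves out of the contention, which is why aggregate throughput can stay high even while any individual station waits a long time.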

Of course this network would be hell to use: from each server's point of
view the network would only be available less than 1% of the time, and
unlike token ring your turn is not guaranteed. However, the point of the
experiment was to show that ethernet still maintains a high utilisation
rate even when the available bandwidth is orders of magnitude below what
is required.

Pete


--
SLUG - Sydney Linux User Group Mailing List - http://slug.org.au/
More Info: http://slug.org.au/lists/listinfo/slug
