I'd like to know how Ryan came up with 50 milliseconds of latency for GigE.
As m0gely points out, GigE has a latency measured in *micro*seconds
(millionths of a second) in a switched LAN environment.

Hell, even 100BaseTX and 10BaseT have latency down in the hundreds of
microseconds (which is obviously sub-millisecond).  Here are ping results
over 100BaseTX (half-duplex) between two of my Debian boxen:

64 bytes from 192.168.100.9: icmp_seq=0 ttl=255 time=0.7 ms
64 bytes from 192.168.100.9: icmp_seq=1 ttl=255 time=0.4 ms
64 bytes from 192.168.100.9: icmp_seq=2 ttl=255 time=0.4 ms
64 bytes from 192.168.100.9: icmp_seq=3 ttl=255 time=0.4 ms
64 bytes from 192.168.100.9: icmp_seq=4 ttl=255 time=0.3 ms
64 bytes from 192.168.100.9: icmp_seq=5 ttl=255 time=0.3 ms

My half-duplex Fast Ethernet LAN pings are 700 microseconds on the high end
and 300 microseconds on the low end (0.7 ms = 700 us, straight from the
output above).  And this is through an old, cheap 16-port Addtron 100TX hub.

m0gely, please clarify that 4.5-microsecond figure.  I assume it refers to
user-process-to-user-process MPI short-message latency over Myrinet?  Don't
confuse that with IP stack ping latency, which is MUCH higher, even on
Myrinet.  The reason MPI over Myrinet is so fast is that the interface does
a DMA transfer directly between RAM segments on the nodes, bypassing the
kernel IP stack.  That's why MPI short-message latency is so low, and why it
rises as the size of the DMA transfer increases.  Mosix/OpenMosix do NOT use
MPI.  They use standard TCP/IP.
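
For the curious, the number people quote as MPI "short message latency"
comes from a ping-pong test: rank 0 sends a tiny message, rank 1 echoes it
back, and you report half the average round trip.  Here's a rough sketch in
C (my own illustration, not anything from the article; the 8-byte message
size and iteration count are just assumptions):

/* Minimal MPI ping-pong latency sketch (illustration only).
 * Compile: mpicc -O2 pingpong.c -o pingpong
 * Run:     mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>

#define ITERS    1000  /* number of round trips to average over */
#define MSG_SIZE 8     /* a "short" message, in bytes */

int main(int argc, char **argv)
{
    char buf[MSG_SIZE] = {0};
    int rank, i;
    double start, elapsed;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);   /* start both ranks together */
    start = MPI_Wtime();

    for (i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    elapsed = MPI_Wtime() - start;

    if (rank == 0) {
        /* one-way "short message latency" = half the average round trip */
        printf("avg one-way latency: %.1f microseconds\n",
               (elapsed / ITERS / 2.0) * 1e6);
    }

    MPI_Finalize();
    return 0;
}

Run that over a Myrinet-aware MPI and you get the kind of single-digit
microsecond numbers the article talks about; run it over the plain IP stack
and you're back up in the ping-latency territory shown above.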

StanTheMan
TheHardwareFreak
http://www.hardwarefreak.com
rcon admin at:
Beer for Breakfast servers        <http://bfb.bogleg.org/>
   209.41.98.2:27016 (CS multi-map)   209.41.98.2:27015 (DoD)
   209.41.98.2:27017 (CS militia/dust2)            Dallas, TX



> Ryan McCullough wrote:
>
> > its not bandwidth of the gigabit nic, its the latency. Lets say it takes
> > 50ms extra to send the data back and forth between the master and the
> > slaves, that adds 50ms of latency to your game. Not worth it.
>
> What if you could shave that to 4.5ms? :)
>
> "This network approach is nice because we can use a standard PCI slot on
> each processor node, which gives a 4.5-microsecond latency," he said, as
> opposed to 90-µs latency for Gigabit Ethernet.
>
> Source:
> http://www.eet.com/at/news/OEG20021111S0037
>
> --
> - m0gely
> http://quake2.telestream.com/
> Q2 | Q3A | Counter-strike
